00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1745 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3006 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.052 The recommended git tool is: git 00:00:00.052 using credential 00000000-0000-0000-0000-000000000002 00:00:00.053 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.082 Fetching changes from the remote Git repository 00:00:00.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.196 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.196 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:01:03.332 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:01:03.343 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:01:03.355 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:01:03.355 > git config core.sparsecheckout # timeout=10 00:01:03.366 > git read-tree -mu HEAD # timeout=10 00:01:03.381 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5 00:01:03.400 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:01:03.400 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:01:03.506 [Pipeline] Start of Pipeline 00:01:03.522 [Pipeline] library 00:01:03.524 Loading library shm_lib@master 00:01:03.524 Library shm_lib@master is cached. Copying from home. 00:01:03.542 [Pipeline] node 00:01:03.552 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.554 [Pipeline] { 00:01:03.567 [Pipeline] catchError 00:01:03.568 [Pipeline] { 00:01:03.583 [Pipeline] wrap 00:01:03.593 [Pipeline] { 00:01:03.601 [Pipeline] stage 00:01:03.604 [Pipeline] { (Prologue) 00:01:03.770 [Pipeline] sh 00:01:04.059 + logger -p user.info -t JENKINS-CI 00:01:04.078 [Pipeline] echo 00:01:04.080 Node: WFP8 00:01:04.087 [Pipeline] sh 00:01:04.384 [Pipeline] setCustomBuildProperty 00:01:04.396 [Pipeline] echo 00:01:04.398 Cleanup processes 00:01:04.403 [Pipeline] sh 00:01:04.688 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.688 50488 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.702 [Pipeline] sh 00:01:04.987 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.987 ++ grep -v 'sudo pgrep' 00:01:04.987 ++ awk '{print $1}' 00:01:04.987 + sudo kill -9 00:01:04.987 + true 00:01:05.005 [Pipeline] cleanWs 00:01:05.017 [WS-CLEANUP] Deleting project workspace... 00:01:05.017 [WS-CLEANUP] Deferred wipeout is used... 
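The prologue above hunts down SPDK processes left over from a previous run of this workspace. Note that in this run pgrep only matched its own sudo wrapper, so after filtering there were no PIDs left, the bare "sudo kill -9" failed, and the trailing "+ true" absorbed the error so the stage still passed. A minimal standalone sketch of the same idiom (workspace path copied from the log; the WORKSPACE variable and the "|| true" guard are just this sketch's stand-ins for the pipeline's inline commands):

  # list candidate processes under the workspace, drop the pgrep itself, keep PIDs
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill whatever is left; an empty PID list makes kill fail, and || true
  # tolerates that so the cleanup never fails the build
  sudo kill -9 $pids || true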
00:01:05.024 [WS-CLEANUP] done 00:01:05.028 [Pipeline] setCustomBuildProperty 00:01:05.045 [Pipeline] sh 00:01:05.331 + sudo git config --global --replace-all safe.directory '*' 00:01:05.409 [Pipeline] nodesByLabel 00:01:05.411 Could not find any nodes with 'sorcerer' label 00:01:05.416 [Pipeline] retry 00:01:05.418 [Pipeline] { 00:01:05.442 [Pipeline] checkout 00:01:05.449 The recommended git tool is: git 00:01:05.460 using credential 00000000-0000-0000-0000-000000000002 00:01:05.465 Cloning the remote Git repository 00:01:05.468 Honoring refspec on initial clone 00:01:05.472 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:01:05.473 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp # timeout=10 00:01:05.479 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:01:05.479 > git --version # timeout=10 00:01:05.483 > git --version # 'git version 2.43.0' 00:01:05.484 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:01:05.484 Setting http proxy: proxy-dmz.intel.com:911 00:01:05.485 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:03:32.748 Avoid second fetch 00:03:32.769 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:03:32.858 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:03:32.864 [Pipeline] } 00:03:32.884 [Pipeline] // retry 00:03:32.896 [Pipeline] nodesByLabel 00:03:32.898 Could not find any nodes with 'sorcerer' label 00:03:32.903 [Pipeline] retry 00:03:32.905 [Pipeline] { 00:03:32.926 [Pipeline] checkout 00:03:32.933 The recommended git tool is: NONE 00:03:32.943 using credential 00000000-0000-0000-0000-000000000002 00:03:32.948 Cloning the remote Git repository 00:03:32.952 Honoring refspec on initial clone 00:03:32.955 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:03:32.956 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10 00:03:32.962 Using reference repository: /var/ci_repos/spdk_multi 00:03:32.962 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:03:32.962 > git --version # timeout=10 00:03:32.966 > git --version # 'git version 2.43.0' 00:03:32.966 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:03:32.967 Setting http proxy: proxy-dmz.intel.com:911 00:03:32.967 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/heads/v24.01.x +refs/heads/master:refs/remotes/origin/master # timeout=10 00:03:32.733 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:03:32.738 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:03:32.751 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:03:32.759 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:03:32.772 > git config core.sparsecheckout # timeout=10 00:03:32.776 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:04:57.097 Avoid second fetch 00:04:57.111 Checking out Revision 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 (FETCH_HEAD) 00:04:57.320 Commit message: "bdev/nvme: Fix the case that namespace was removed during reset" 00:04:57.339 First time build. Skipping changelog. 
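Both checkouts above avoid a full download: the jbp repository is fetched with --depth=1, and the spdk repository borrows objects from a local mirror ("Using reference repository: /var/ci_repos/spdk_multi"). A rough hand-run equivalent of the spdk checkout, with the mirror path, refspec, and commit taken from the log (credentials and the Intel proxy are handled by the Jenkins git plugin in the real job):

  # borrow objects from the local mirror instead of re-fetching them over the network
  git clone --reference /var/ci_repos/spdk_multi \
      https://review.spdk.io/gerrit/a/spdk/spdk spdk
  cd spdk
  git fetch --tags --force origin refs/heads/v24.01.x
  # pin the exact revision the job built
  git checkout -f 36faa8c312bf9059b86e0f503d7fd6b43c1498e6
  # submodules reuse the same mirror, as the plugin does below
  git submodule update --init --recursive --reference /var/ci_repos/spdk_multi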
00:04:57.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:04:57.086 > git config --add remote.origin.fetch refs/heads/v24.01.x # timeout=10 00:04:57.088 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:04:57.100 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:04:57.108 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:04:57.115 > git config core.sparsecheckout # timeout=10 00:04:57.119 > git checkout -f 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 # timeout=10 00:04:57.328 > git rev-list --no-walk 27395820e570bad3910444111c4d7d52b3ea17ad # timeout=10 00:04:57.361 > git remote # timeout=10 00:04:57.365 > git submodule init # timeout=10 00:04:57.426 > git submodule sync # timeout=10 00:04:57.489 > git config --get remote.origin.url # timeout=10 00:04:57.499 > git submodule init # timeout=10 00:04:57.561 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:04:57.565 > git config --get submodule.dpdk.url # timeout=10 00:04:57.570 > git remote # timeout=10 00:04:57.574 > git config --get remote.origin.url # timeout=10 00:04:57.578 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:04:57.582 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:04:57.586 > git remote # timeout=10 00:04:57.590 > git config --get remote.origin.url # timeout=10 00:04:57.594 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:04:57.598 > git config --get submodule.isa-l.url # timeout=10 00:04:57.603 > git remote # timeout=10 00:04:57.607 > git config --get remote.origin.url # timeout=10 00:04:57.611 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:04:57.615 > git config --get submodule.ocf.url # timeout=10 00:04:57.618 > git remote # timeout=10 00:04:57.620 > git config --get remote.origin.url # timeout=10 00:04:57.624 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:04:57.628 > git config --get submodule.libvfio-user.url # timeout=10 00:04:57.630 > git remote # timeout=10 00:04:57.635 > git config --get remote.origin.url # timeout=10 00:04:57.639 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:04:57.643 > git config --get submodule.xnvme.url # timeout=10 00:04:57.645 > git remote # timeout=10 00:04:57.650 > git config --get remote.origin.url # timeout=10 00:04:57.654 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:04:57.658 > git config --get submodule.isa-l-crypto.url # timeout=10 00:04:57.662 > git remote # timeout=10 00:04:57.666 > git config --get remote.origin.url # timeout=10 00:04:57.670 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.677 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 > git submodule update --init --recursive --reference 
/var/ci_repos/spdk_multi dpdk # timeout=10 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:04:57.678 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:04:57.678 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 Setting http proxy: proxy-dmz.intel.com:911 00:04:57.678 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:04:57.678 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:04:57.678 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:05:08.924 [Pipeline] } 00:05:08.946 [Pipeline] // retry 00:05:08.955 [Pipeline] sh 00:05:09.247 + git -C spdk log --oneline -n5 00:05:09.247 36faa8c312b bdev/nvme: Fix the case that namespace was removed during reset 00:05:09.247 e2cb5a5eed9 bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:05:09.247 4b134b4abdb bdev/nvme: Delay callbacks when the next operation is a failover 00:05:09.247 d2ea4ecb14a llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:05:09.247 3b33f433344 test/nvme/cuse: Fix typo 00:05:09.296 [Pipeline] } 00:05:09.307 [Pipeline] // stage 00:05:09.314 [Pipeline] stage 00:05:09.315 [Pipeline] { (Prepare) 00:05:09.327 [Pipeline] writeFile 00:05:09.338 [Pipeline] sh 00:05:09.617 + logger -p user.info -t JENKINS-CI 00:05:09.630 [Pipeline] sh 00:05:09.912 + logger -p user.info -t JENKINS-CI 00:05:09.923 [Pipeline] sh 00:05:10.207 + cat autorun-spdk.conf 00:05:10.207 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:10.207 SPDK_TEST_NVMF=1 00:05:10.207 SPDK_TEST_NVME_CLI=1 00:05:10.207 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:10.207 SPDK_TEST_NVMF_NICS=e810 00:05:10.207 SPDK_RUN_UBSAN=1 00:05:10.207 NET_TYPE=phy 00:05:10.217 RUN_NIGHTLY=1 00:05:10.222 [Pipeline] readFile 00:05:10.249 [Pipeline] withEnv 00:05:10.251 [Pipeline] { 00:05:10.265 [Pipeline] sh 00:05:10.548 + set -ex 00:05:10.548 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:10.548 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:10.548 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:10.548 ++ SPDK_TEST_NVMF=1 00:05:10.548 ++ SPDK_TEST_NVME_CLI=1 00:05:10.548 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:10.548 ++ SPDK_TEST_NVMF_NICS=e810 00:05:10.548 ++ SPDK_RUN_UBSAN=1 00:05:10.548 ++ NET_TYPE=phy 00:05:10.548 ++ RUN_NIGHTLY=1 00:05:10.548 + case $SPDK_TEST_NVMF_NICS in 00:05:10.548 + DRIVERS=ice 00:05:10.548 + [[ tcp == \r\d\m\a ]] 00:05:10.548 + [[ -n ice ]] 00:05:10.548 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:10.548 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:05:13.843 rmmod: ERROR: Module irdma is not currently loaded 00:05:13.843 rmmod: ERROR: Module i40iw is not currently loaded 00:05:13.843 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:05:13.843 + true 00:05:13.843 + for D in $DRIVERS 00:05:13.843 + sudo modprobe ice 00:05:13.843 + exit 0 00:05:13.853 [Pipeline] } 00:05:13.874 [Pipeline] // withEnv 00:05:13.880 [Pipeline] } 00:05:13.902 [Pipeline] // stage 00:05:13.926 [Pipeline] catchError 00:05:13.928 [Pipeline] { 00:05:13.975 [Pipeline] timeout 00:05:13.975 
Timeout set to expire in 40 min 00:05:13.977 [Pipeline] { 00:05:13.990 [Pipeline] stage 00:05:13.992 [Pipeline] { (Tests) 00:05:14.005 [Pipeline] sh 00:05:14.284 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:14.284 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:14.284 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:14.284 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:05:14.284 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.284 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:14.284 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:05:14.284 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:14.284 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:14.284 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:14.284 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:14.284 + source /etc/os-release 00:05:14.284 ++ NAME='Fedora Linux' 00:05:14.284 ++ VERSION='38 (Cloud Edition)' 00:05:14.284 ++ ID=fedora 00:05:14.284 ++ VERSION_ID=38 00:05:14.284 ++ VERSION_CODENAME= 00:05:14.284 ++ PLATFORM_ID=platform:f38 00:05:14.284 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:05:14.284 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:14.284 ++ LOGO=fedora-logo-icon 00:05:14.284 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:05:14.284 ++ HOME_URL=https://fedoraproject.org/ 00:05:14.284 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:05:14.284 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:14.284 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:14.284 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:14.284 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:05:14.284 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:14.284 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:05:14.284 ++ SUPPORT_END=2024-05-14 00:05:14.284 ++ VARIANT='Cloud Edition' 00:05:14.284 ++ VARIANT_ID=cloud 00:05:14.284 + uname -a 00:05:14.284 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:05:14.284 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:16.822 Hugepages 00:05:16.822 node hugesize free / total 00:05:16.822 node0 1048576kB 0 / 0 00:05:16.822 node0 2048kB 2048 / 2048 00:05:16.822 node1 1048576kB 0 / 0 00:05:16.822 node1 2048kB 0 / 0 00:05:16.822 00:05:16.822 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:16.822 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:16.822 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:16.822 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:16.822 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:16.822 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:16.822 + rm -f /tmp/spdk-ld-path 
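The setup.sh status output above reports per-NUMA-node hugepage pools (node0 has 2048 two-megabyte pages reserved, node1 has none) followed by the I/OAT and NVMe PCI devices. A small sketch of where those "node hugesize free / total" numbers come from, assuming a standard Linux sysfs layout:

  # per-node hugepage pools, same figures as the table above
  for node in /sys/devices/system/node/node*; do
    for pool in "$node"/hugepages/hugepages-*; do
      size=${pool##*hugepages-}    # e.g. 2048kB or 1048576kB
      echo "${node##*/} $size $(cat "$pool/free_hugepages") / $(cat "$pool/nr_hugepages")"
    done
  done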
00:05:16.822 + source autorun-spdk.conf 00:05:16.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:16.822 ++ SPDK_TEST_NVMF=1 00:05:16.822 ++ SPDK_TEST_NVME_CLI=1 00:05:16.822 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:16.822 ++ SPDK_TEST_NVMF_NICS=e810 00:05:16.822 ++ SPDK_RUN_UBSAN=1 00:05:16.822 ++ NET_TYPE=phy 00:05:16.822 ++ RUN_NIGHTLY=1 00:05:16.822 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:16.822 + [[ -n '' ]] 00:05:16.822 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.822 + for M in /var/spdk/build-*-manifest.txt 00:05:16.822 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:16.822 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:16.822 + for M in /var/spdk/build-*-manifest.txt 00:05:16.822 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:16.822 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:16.822 ++ uname 00:05:16.822 + [[ Linux == \L\i\n\u\x ]] 00:05:16.822 + sudo dmesg -T 00:05:16.822 + sudo dmesg --clear 00:05:16.822 + dmesg_pid=53173 00:05:16.822 + [[ Fedora Linux == FreeBSD ]] 00:05:16.822 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:16.822 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:16.822 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:16.822 + sudo dmesg -Tw 00:05:16.822 + [[ -x /usr/src/fio-static/fio ]] 00:05:16.822 + export FIO_BIN=/usr/src/fio-static/fio 00:05:16.822 + FIO_BIN=/usr/src/fio-static/fio 00:05:16.822 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:16.822 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:16.822 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:16.822 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:16.822 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:16.822 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:16.822 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:16.822 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:16.822 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:16.822 Test configuration: 00:05:16.822 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:16.822 SPDK_TEST_NVMF=1 00:05:16.822 SPDK_TEST_NVME_CLI=1 00:05:16.822 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:16.822 SPDK_TEST_NVMF_NICS=e810 00:05:16.822 SPDK_RUN_UBSAN=1 00:05:16.822 NET_TYPE=phy 00:05:16.822 RUN_NIGHTLY=1 10:00:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.822 10:00:30 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:16.822 10:00:30 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.822 10:00:30 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.822 10:00:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.823 10:00:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.823 10:00:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.823 10:00:30 -- paths/export.sh@5 -- $ export PATH 00:05:16.823 10:00:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.823 10:00:30 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:16.823 10:00:30 -- common/autobuild_common.sh@435 -- $ date +%s 00:05:16.823 10:00:30 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713945630.XXXXXX 00:05:16.823 10:00:30 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713945630.HTMBf3 00:05:16.823 10:00:30 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:05:16.823 10:00:30 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:05:16.823 10:00:30 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:05:16.823 10:00:30 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:05:16.823 10:00:30 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:05:16.823 10:00:30 -- common/autobuild_common.sh@451 -- $ get_config_params 00:05:16.823 10:00:30 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:05:16.823 10:00:30 -- common/autotest_common.sh@10 -- $ set +x 00:05:16.823 10:00:30 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:05:16.823 10:00:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:16.823 10:00:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:16.823 10:00:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.823 10:00:30 -- spdk/autobuild.sh@16 -- $ date -u 00:05:16.823 Wed Apr 24 08:00:30 AM UTC 2024 00:05:16.823 10:00:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:17.082 LTS-24-g36faa8c312b 00:05:17.082 10:00:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:17.082 10:00:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:17.082 10:00:30 -- 
spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:17.082 10:00:30 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:05:17.082 10:00:30 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:05:17.082 10:00:30 -- common/autotest_common.sh@10 -- $ set +x 00:05:17.082 ************************************ 00:05:17.082 START TEST ubsan 00:05:17.082 ************************************ 00:05:17.082 10:00:30 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:05:17.082 using ubsan 00:05:17.082 00:05:17.082 real 0m0.000s 00:05:17.082 user 0m0.000s 00:05:17.082 sys 0m0.000s 00:05:17.082 10:00:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:17.082 10:00:30 -- common/autotest_common.sh@10 -- $ set +x 00:05:17.082 ************************************ 00:05:17.082 END TEST ubsan 00:05:17.082 ************************************ 00:05:17.082 10:00:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:17.082 10:00:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:17.082 10:00:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:17.082 10:00:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:17.082 10:00:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:17.082 10:00:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:17.082 10:00:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:17.082 10:00:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:17.082 10:00:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:05:17.082 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:17.082 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:17.341 Using 'verbs' RDMA provider 00:05:30.121 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:05:40.104 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:05:40.363 Creating mk/config.mk...done. 00:05:40.363 Creating mk/cc.flags.mk...done. 00:05:40.363 Type 'make' to build. 00:05:40.363 10:00:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:05:40.363 10:00:53 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:05:40.363 10:00:53 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:05:40.363 10:00:53 -- common/autotest_common.sh@10 -- $ set +x 00:05:40.363 ************************************ 00:05:40.363 START TEST make 00:05:40.363 ************************************ 00:05:40.363 10:00:53 -- common/autotest_common.sh@1104 -- $ make -j96 00:05:40.622 make[1]: Nothing to be done for 'all'. 
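To reproduce this build step outside CI, the same configure flags and job count can be passed by hand (flags copied verbatim from the autobuild invocation above; the -j value simply matches this runner's core count):

  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-shared
  make -j96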
00:05:48.819 The Meson build system 00:05:48.819 Version: 1.3.1 00:05:48.819 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:48.819 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:48.819 Build type: native build 00:05:48.819 Program cat found: YES (/usr/bin/cat) 00:05:48.819 Project name: DPDK 00:05:48.819 Project version: 23.11.0 00:05:48.819 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:48.819 C linker for the host machine: cc ld.bfd 2.39-16 00:05:48.819 Host machine cpu family: x86_64 00:05:48.819 Host machine cpu: x86_64 00:05:48.819 Message: ## Building in Developer Mode ## 00:05:48.819 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:48.819 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:48.819 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:48.819 Program python3 found: YES (/usr/bin/python3) 00:05:48.819 Program cat found: YES (/usr/bin/cat) 00:05:48.819 Compiler for C supports arguments -march=native: YES 00:05:48.819 Checking for size of "void *" : 8 00:05:48.819 Checking for size of "void *" : 8 (cached) 00:05:48.819 Library m found: YES 00:05:48.819 Library numa found: YES 00:05:48.819 Has header "numaif.h" : YES 00:05:48.819 Library fdt found: NO 00:05:48.819 Library execinfo found: NO 00:05:48.819 Has header "execinfo.h" : YES 00:05:48.819 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:48.819 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:48.819 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:48.819 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:48.819 Run-time dependency openssl found: YES 3.0.9 00:05:48.820 Run-time dependency libpcap found: YES 1.10.4 00:05:48.820 Has header "pcap.h" with dependency libpcap: YES 00:05:48.820 Compiler for C supports arguments -Wcast-qual: YES 00:05:48.820 Compiler for C supports arguments -Wdeprecated: YES 00:05:48.820 Compiler for C supports arguments -Wformat: YES 00:05:48.820 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:48.820 Compiler for C supports arguments -Wformat-security: NO 00:05:48.820 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:48.820 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:48.820 Compiler for C supports arguments -Wnested-externs: YES 00:05:48.820 Compiler for C supports arguments -Wold-style-definition: YES 00:05:48.820 Compiler for C supports arguments -Wpointer-arith: YES 00:05:48.820 Compiler for C supports arguments -Wsign-compare: YES 00:05:48.820 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:48.820 Compiler for C supports arguments -Wundef: YES 00:05:48.820 Compiler for C supports arguments -Wwrite-strings: YES 00:05:48.820 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:48.820 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:48.820 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:48.820 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:48.820 Program objdump found: YES (/usr/bin/objdump) 00:05:48.820 Compiler for C supports arguments -mavx512f: YES 00:05:48.820 Checking if "AVX512 checking" compiles: YES 00:05:48.820 Fetching value of define "__SSE4_2__" : 1 00:05:48.820 Fetching value of 
define "__AES__" : 1 00:05:48.820 Fetching value of define "__AVX__" : 1 00:05:48.820 Fetching value of define "__AVX2__" : 1 00:05:48.820 Fetching value of define "__AVX512BW__" : 1 00:05:48.820 Fetching value of define "__AVX512CD__" : 1 00:05:48.820 Fetching value of define "__AVX512DQ__" : 1 00:05:48.820 Fetching value of define "__AVX512F__" : 1 00:05:48.820 Fetching value of define "__AVX512VL__" : 1 00:05:48.820 Fetching value of define "__PCLMUL__" : 1 00:05:48.820 Fetching value of define "__RDRND__" : 1 00:05:48.820 Fetching value of define "__RDSEED__" : 1 00:05:48.820 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:48.820 Fetching value of define "__znver1__" : (undefined) 00:05:48.820 Fetching value of define "__znver2__" : (undefined) 00:05:48.820 Fetching value of define "__znver3__" : (undefined) 00:05:48.820 Fetching value of define "__znver4__" : (undefined) 00:05:48.820 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:48.820 Message: lib/log: Defining dependency "log" 00:05:48.820 Message: lib/kvargs: Defining dependency "kvargs" 00:05:48.820 Message: lib/telemetry: Defining dependency "telemetry" 00:05:48.820 Checking for function "getentropy" : NO 00:05:48.820 Message: lib/eal: Defining dependency "eal" 00:05:48.820 Message: lib/ring: Defining dependency "ring" 00:05:48.820 Message: lib/rcu: Defining dependency "rcu" 00:05:48.820 Message: lib/mempool: Defining dependency "mempool" 00:05:48.820 Message: lib/mbuf: Defining dependency "mbuf" 00:05:48.820 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:48.820 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:48.820 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:48.820 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:48.820 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:48.820 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:48.820 Compiler for C supports arguments -mpclmul: YES 00:05:48.820 Compiler for C supports arguments -maes: YES 00:05:48.820 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:48.820 Compiler for C supports arguments -mavx512bw: YES 00:05:48.820 Compiler for C supports arguments -mavx512dq: YES 00:05:48.820 Compiler for C supports arguments -mavx512vl: YES 00:05:48.820 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:48.820 Compiler for C supports arguments -mavx2: YES 00:05:48.820 Compiler for C supports arguments -mavx: YES 00:05:48.820 Message: lib/net: Defining dependency "net" 00:05:48.820 Message: lib/meter: Defining dependency "meter" 00:05:48.820 Message: lib/ethdev: Defining dependency "ethdev" 00:05:48.820 Message: lib/pci: Defining dependency "pci" 00:05:48.820 Message: lib/cmdline: Defining dependency "cmdline" 00:05:48.820 Message: lib/hash: Defining dependency "hash" 00:05:48.820 Message: lib/timer: Defining dependency "timer" 00:05:48.820 Message: lib/compressdev: Defining dependency "compressdev" 00:05:48.820 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:48.820 Message: lib/dmadev: Defining dependency "dmadev" 00:05:48.820 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:48.820 Message: lib/power: Defining dependency "power" 00:05:48.820 Message: lib/reorder: Defining dependency "reorder" 00:05:48.820 Message: lib/security: Defining dependency "security" 00:05:48.820 Has header "linux/userfaultfd.h" : YES 00:05:48.820 Has header "linux/vduse.h" : YES 00:05:48.820 Message: lib/vhost: Defining dependency "vhost" 00:05:48.820 Compiler 
for C supports arguments -Wno-format-truncation: YES (cached) 00:05:48.820 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:48.820 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:48.820 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:48.820 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:48.820 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:48.820 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:48.820 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:48.820 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:48.820 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:48.820 Program doxygen found: YES (/usr/bin/doxygen) 00:05:48.820 Configuring doxy-api-html.conf using configuration 00:05:48.820 Configuring doxy-api-man.conf using configuration 00:05:48.820 Program mandb found: YES (/usr/bin/mandb) 00:05:48.820 Program sphinx-build found: NO 00:05:48.820 Configuring rte_build_config.h using configuration 00:05:48.820 Message: 00:05:48.820 ================= 00:05:48.820 Applications Enabled 00:05:48.820 ================= 00:05:48.820 00:05:48.820 apps: 00:05:48.820 00:05:48.820 00:05:48.820 Message: 00:05:48.820 ================= 00:05:48.820 Libraries Enabled 00:05:48.820 ================= 00:05:48.820 00:05:48.820 libs: 00:05:48.820 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:48.820 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:48.820 cryptodev, dmadev, power, reorder, security, vhost, 00:05:48.820 00:05:48.820 Message: 00:05:48.820 =============== 00:05:48.820 Drivers Enabled 00:05:48.820 =============== 00:05:48.820 00:05:48.820 common: 00:05:48.820 00:05:48.820 bus: 00:05:48.820 pci, vdev, 00:05:48.820 mempool: 00:05:48.820 ring, 00:05:48.820 dma: 00:05:48.820 00:05:48.820 net: 00:05:48.820 00:05:48.820 crypto: 00:05:48.820 00:05:48.820 compress: 00:05:48.820 00:05:48.820 vdpa: 00:05:48.820 00:05:48.820 00:05:48.820 Message: 00:05:48.820 ================= 00:05:48.820 Content Skipped 00:05:48.820 ================= 00:05:48.820 00:05:48.820 apps: 00:05:48.820 dumpcap: explicitly disabled via build config 00:05:48.820 graph: explicitly disabled via build config 00:05:48.820 pdump: explicitly disabled via build config 00:05:48.820 proc-info: explicitly disabled via build config 00:05:48.820 test-acl: explicitly disabled via build config 00:05:48.820 test-bbdev: explicitly disabled via build config 00:05:48.820 test-cmdline: explicitly disabled via build config 00:05:48.820 test-compress-perf: explicitly disabled via build config 00:05:48.820 test-crypto-perf: explicitly disabled via build config 00:05:48.820 test-dma-perf: explicitly disabled via build config 00:05:48.820 test-eventdev: explicitly disabled via build config 00:05:48.820 test-fib: explicitly disabled via build config 00:05:48.820 test-flow-perf: explicitly disabled via build config 00:05:48.820 test-gpudev: explicitly disabled via build config 00:05:48.820 test-mldev: explicitly disabled via build config 00:05:48.820 test-pipeline: explicitly disabled via build config 00:05:48.820 test-pmd: explicitly disabled via build config 00:05:48.820 test-regex: explicitly disabled via build config 00:05:48.820 test-sad: explicitly disabled via build config 00:05:48.820 test-security-perf: explicitly disabled via build config 00:05:48.820 00:05:48.820 libs: 00:05:48.820 
metrics: explicitly disabled via build config 00:05:48.820 acl: explicitly disabled via build config 00:05:48.820 bbdev: explicitly disabled via build config 00:05:48.820 bitratestats: explicitly disabled via build config 00:05:48.820 bpf: explicitly disabled via build config 00:05:48.820 cfgfile: explicitly disabled via build config 00:05:48.820 distributor: explicitly disabled via build config 00:05:48.820 efd: explicitly disabled via build config 00:05:48.820 eventdev: explicitly disabled via build config 00:05:48.820 dispatcher: explicitly disabled via build config 00:05:48.820 gpudev: explicitly disabled via build config 00:05:48.820 gro: explicitly disabled via build config 00:05:48.820 gso: explicitly disabled via build config 00:05:48.820 ip_frag: explicitly disabled via build config 00:05:48.820 jobstats: explicitly disabled via build config 00:05:48.820 latencystats: explicitly disabled via build config 00:05:48.820 lpm: explicitly disabled via build config 00:05:48.820 member: explicitly disabled via build config 00:05:48.820 pcapng: explicitly disabled via build config 00:05:48.820 rawdev: explicitly disabled via build config 00:05:48.820 regexdev: explicitly disabled via build config 00:05:48.820 mldev: explicitly disabled via build config 00:05:48.820 rib: explicitly disabled via build config 00:05:48.820 sched: explicitly disabled via build config 00:05:48.820 stack: explicitly disabled via build config 00:05:48.820 ipsec: explicitly disabled via build config 00:05:48.820 pdcp: explicitly disabled via build config 00:05:48.820 fib: explicitly disabled via build config 00:05:48.820 port: explicitly disabled via build config 00:05:48.820 pdump: explicitly disabled via build config 00:05:48.820 table: explicitly disabled via build config 00:05:48.820 pipeline: explicitly disabled via build config 00:05:48.820 graph: explicitly disabled via build config 00:05:48.821 node: explicitly disabled via build config 00:05:48.821 00:05:48.821 drivers: 00:05:48.821 common/cpt: not in enabled drivers build config 00:05:48.821 common/dpaax: not in enabled drivers build config 00:05:48.821 common/iavf: not in enabled drivers build config 00:05:48.821 common/idpf: not in enabled drivers build config 00:05:48.821 common/mvep: not in enabled drivers build config 00:05:48.821 common/octeontx: not in enabled drivers build config 00:05:48.821 bus/auxiliary: not in enabled drivers build config 00:05:48.821 bus/cdx: not in enabled drivers build config 00:05:48.821 bus/dpaa: not in enabled drivers build config 00:05:48.821 bus/fslmc: not in enabled drivers build config 00:05:48.821 bus/ifpga: not in enabled drivers build config 00:05:48.821 bus/platform: not in enabled drivers build config 00:05:48.821 bus/vmbus: not in enabled drivers build config 00:05:48.821 common/cnxk: not in enabled drivers build config 00:05:48.821 common/mlx5: not in enabled drivers build config 00:05:48.821 common/nfp: not in enabled drivers build config 00:05:48.821 common/qat: not in enabled drivers build config 00:05:48.821 common/sfc_efx: not in enabled drivers build config 00:05:48.821 mempool/bucket: not in enabled drivers build config 00:05:48.821 mempool/cnxk: not in enabled drivers build config 00:05:48.821 mempool/dpaa: not in enabled drivers build config 00:05:48.821 mempool/dpaa2: not in enabled drivers build config 00:05:48.821 mempool/octeontx: not in enabled drivers build config 00:05:48.821 mempool/stack: not in enabled drivers build config 00:05:48.821 dma/cnxk: not in enabled drivers build config 
00:05:48.821 dma/dpaa: not in enabled drivers build config 00:05:48.821 dma/dpaa2: not in enabled drivers build config 00:05:48.821 dma/hisilicon: not in enabled drivers build config 00:05:48.821 dma/idxd: not in enabled drivers build config 00:05:48.821 dma/ioat: not in enabled drivers build config 00:05:48.821 dma/skeleton: not in enabled drivers build config 00:05:48.821 net/af_packet: not in enabled drivers build config 00:05:48.821 net/af_xdp: not in enabled drivers build config 00:05:48.821 net/ark: not in enabled drivers build config 00:05:48.821 net/atlantic: not in enabled drivers build config 00:05:48.821 net/avp: not in enabled drivers build config 00:05:48.821 net/axgbe: not in enabled drivers build config 00:05:48.821 net/bnx2x: not in enabled drivers build config 00:05:48.821 net/bnxt: not in enabled drivers build config 00:05:48.821 net/bonding: not in enabled drivers build config 00:05:48.821 net/cnxk: not in enabled drivers build config 00:05:48.821 net/cpfl: not in enabled drivers build config 00:05:48.821 net/cxgbe: not in enabled drivers build config 00:05:48.821 net/dpaa: not in enabled drivers build config 00:05:48.821 net/dpaa2: not in enabled drivers build config 00:05:48.821 net/e1000: not in enabled drivers build config 00:05:48.821 net/ena: not in enabled drivers build config 00:05:48.821 net/enetc: not in enabled drivers build config 00:05:48.821 net/enetfec: not in enabled drivers build config 00:05:48.821 net/enic: not in enabled drivers build config 00:05:48.821 net/failsafe: not in enabled drivers build config 00:05:48.821 net/fm10k: not in enabled drivers build config 00:05:48.821 net/gve: not in enabled drivers build config 00:05:48.821 net/hinic: not in enabled drivers build config 00:05:48.821 net/hns3: not in enabled drivers build config 00:05:48.821 net/i40e: not in enabled drivers build config 00:05:48.821 net/iavf: not in enabled drivers build config 00:05:48.821 net/ice: not in enabled drivers build config 00:05:48.821 net/idpf: not in enabled drivers build config 00:05:48.821 net/igc: not in enabled drivers build config 00:05:48.821 net/ionic: not in enabled drivers build config 00:05:48.821 net/ipn3ke: not in enabled drivers build config 00:05:48.821 net/ixgbe: not in enabled drivers build config 00:05:48.821 net/mana: not in enabled drivers build config 00:05:48.821 net/memif: not in enabled drivers build config 00:05:48.821 net/mlx4: not in enabled drivers build config 00:05:48.821 net/mlx5: not in enabled drivers build config 00:05:48.821 net/mvneta: not in enabled drivers build config 00:05:48.821 net/mvpp2: not in enabled drivers build config 00:05:48.821 net/netvsc: not in enabled drivers build config 00:05:48.821 net/nfb: not in enabled drivers build config 00:05:48.821 net/nfp: not in enabled drivers build config 00:05:48.821 net/ngbe: not in enabled drivers build config 00:05:48.821 net/null: not in enabled drivers build config 00:05:48.821 net/octeontx: not in enabled drivers build config 00:05:48.821 net/octeon_ep: not in enabled drivers build config 00:05:48.821 net/pcap: not in enabled drivers build config 00:05:48.821 net/pfe: not in enabled drivers build config 00:05:48.821 net/qede: not in enabled drivers build config 00:05:48.821 net/ring: not in enabled drivers build config 00:05:48.821 net/sfc: not in enabled drivers build config 00:05:48.821 net/softnic: not in enabled drivers build config 00:05:48.821 net/tap: not in enabled drivers build config 00:05:48.821 net/thunderx: not in enabled drivers build config 00:05:48.821 
net/txgbe: not in enabled drivers build config 00:05:48.821 net/vdev_netvsc: not in enabled drivers build config 00:05:48.821 net/vhost: not in enabled drivers build config 00:05:48.821 net/virtio: not in enabled drivers build config 00:05:48.821 net/vmxnet3: not in enabled drivers build config 00:05:48.821 raw/*: missing internal dependency, "rawdev" 00:05:48.821 crypto/armv8: not in enabled drivers build config 00:05:48.821 crypto/bcmfs: not in enabled drivers build config 00:05:48.821 crypto/caam_jr: not in enabled drivers build config 00:05:48.821 crypto/ccp: not in enabled drivers build config 00:05:48.821 crypto/cnxk: not in enabled drivers build config 00:05:48.821 crypto/dpaa_sec: not in enabled drivers build config 00:05:48.821 crypto/dpaa2_sec: not in enabled drivers build config 00:05:48.821 crypto/ipsec_mb: not in enabled drivers build config 00:05:48.821 crypto/mlx5: not in enabled drivers build config 00:05:48.821 crypto/mvsam: not in enabled drivers build config 00:05:48.821 crypto/nitrox: not in enabled drivers build config 00:05:48.821 crypto/null: not in enabled drivers build config 00:05:48.821 crypto/octeontx: not in enabled drivers build config 00:05:48.821 crypto/openssl: not in enabled drivers build config 00:05:48.821 crypto/scheduler: not in enabled drivers build config 00:05:48.821 crypto/uadk: not in enabled drivers build config 00:05:48.821 crypto/virtio: not in enabled drivers build config 00:05:48.821 compress/isal: not in enabled drivers build config 00:05:48.821 compress/mlx5: not in enabled drivers build config 00:05:48.821 compress/octeontx: not in enabled drivers build config 00:05:48.821 compress/zlib: not in enabled drivers build config 00:05:48.821 regex/*: missing internal dependency, "regexdev" 00:05:48.821 ml/*: missing internal dependency, "mldev" 00:05:48.821 vdpa/ifc: not in enabled drivers build config 00:05:48.821 vdpa/mlx5: not in enabled drivers build config 00:05:48.821 vdpa/nfp: not in enabled drivers build config 00:05:48.821 vdpa/sfc: not in enabled drivers build config 00:05:48.821 event/*: missing internal dependency, "eventdev" 00:05:48.821 baseband/*: missing internal dependency, "bbdev" 00:05:48.821 gpu/*: missing internal dependency, "gpudev" 00:05:48.821 00:05:48.821 00:05:48.821 Build targets in project: 85 00:05:48.821 00:05:48.821 DPDK 23.11.0 00:05:48.821 00:05:48.821 User defined options 00:05:48.821 buildtype : debug 00:05:48.821 default_library : shared 00:05:48.821 libdir : lib 00:05:48.821 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:48.821 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:05:48.821 c_link_args : 00:05:48.821 cpu_instruction_set: native 00:05:48.821 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:48.821 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:48.821 enable_docs : false 00:05:48.821 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:48.821 enable_kmods : false 00:05:48.821 tests : false 00:05:48.821 00:05:48.821 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:48.821 ninja: 
Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:48.821 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:48.821 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:48.821 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:48.821 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:48.821 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:48.821 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:48.821 [7/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:48.821 [8/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:48.821 [9/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:48.821 [10/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:48.821 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:48.821 [12/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:48.821 [13/265] Linking static target lib/librte_kvargs.a 00:05:48.821 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:48.821 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:48.821 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:48.821 [17/265] Linking static target lib/librte_log.a 00:05:48.821 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:48.821 [19/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:48.821 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:48.821 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:49.083 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:49.083 [23/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:49.083 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:49.083 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:49.083 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:49.083 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:49.083 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:49.083 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:49.083 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:49.083 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:49.083 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:49.083 [33/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:49.083 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:49.083 [35/265] Linking static target lib/librte_pci.a 00:05:49.084 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:49.084 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:49.342 [38/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:49.342 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:49.342 [40/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:49.342 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:49.342 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:49.342 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:49.342 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:49.342 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:49.342 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:49.342 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:49.342 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:49.342 [49/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:49.342 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:49.342 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:49.342 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:49.342 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:49.342 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:49.342 [55/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:49.342 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:49.342 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:49.342 [58/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:49.342 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:49.342 [60/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:49.342 [61/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.342 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:49.342 [63/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:49.342 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:49.342 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:49.342 [66/265] Linking static target lib/librte_meter.a 00:05:49.342 [67/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:49.342 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:49.342 [69/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:49.342 [70/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:49.342 [71/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:49.342 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:49.342 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:49.342 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:49.342 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:49.342 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:49.342 [77/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:49.342 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:49.342 [79/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:49.342 [80/265] Linking 
static target lib/librte_telemetry.a 00:05:49.342 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:49.342 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:49.342 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:49.342 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:49.342 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:49.342 [86/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:49.342 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:49.342 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:49.342 [89/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:49.342 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:49.342 [91/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:49.342 [92/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:49.342 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:49.342 [94/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:49.342 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:49.342 [96/265] Linking static target lib/librte_ring.a 00:05:49.342 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:49.342 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:49.342 [99/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.342 [100/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:49.601 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:49.601 [102/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:49.601 [103/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:49.601 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:49.601 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:49.601 [106/265] Linking static target lib/librte_cmdline.a 00:05:49.601 [107/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:49.601 [108/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:49.601 [109/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:49.601 [110/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:49.601 [111/265] Linking static target lib/librte_rcu.a 00:05:49.601 [112/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:49.601 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:49.601 [114/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:49.601 [115/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:49.601 [116/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:49.601 [117/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:49.601 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:49.601 [119/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:49.601 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:49.601 [121/265] Linking 
static target lib/librte_mempool.a 00:05:49.601 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:49.601 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:49.601 [124/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:49.601 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:49.601 [126/265] Linking static target lib/librte_eal.a 00:05:49.601 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:49.601 [128/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:49.601 [129/265] Linking static target lib/librte_net.a 00:05:49.601 [130/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:49.601 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:49.601 [132/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:49.601 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:49.601 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:49.601 [135/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:49.601 [136/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.601 [137/265] Linking static target lib/librte_timer.a 00:05:49.601 [138/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.601 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:49.601 [140/265] Linking target lib/librte_log.so.24.0 00:05:49.602 [141/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:49.602 [142/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:49.602 [143/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:49.602 [144/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:49.602 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:49.602 [146/265] Linking static target lib/librte_compressdev.a 00:05:49.602 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:49.602 [148/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.602 [149/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:49.602 [150/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:49.602 [151/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:49.861 [152/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:49.861 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:49.861 [154/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:49.861 [155/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.861 [156/265] Linking static target lib/librte_mbuf.a 00:05:49.861 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:49.861 [158/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:49.861 [159/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:05:49.861 [160/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:49.861 [161/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:49.861 [162/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:49.861 [163/265] Linking target lib/librte_kvargs.so.24.0 00:05:49.861 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:49.861 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:49.861 [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:49.861 [167/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.861 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:49.861 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:49.861 [170/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:49.861 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:49.861 [172/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.861 [173/265] Linking static target lib/librte_hash.a 00:05:49.861 [174/265] Linking target lib/librte_telemetry.so.24.0 00:05:49.861 [175/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:49.861 [176/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:49.861 [177/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:49.861 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:49.861 [179/265] Linking static target lib/librte_dmadev.a 00:05:49.861 [180/265] Linking static target lib/librte_power.a 00:05:49.861 [181/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:49.861 [182/265] Linking static target lib/librte_security.a 00:05:49.861 [183/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:05:49.861 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:49.861 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:49.861 [186/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:49.861 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:49.861 [188/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:50.120 [189/265] Linking static target lib/librte_reorder.a 00:05:50.120 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:50.120 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:50.120 [192/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.120 [193/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:50.120 [194/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:50.120 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:50.120 [196/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:50.120 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:50.120 [198/265] Linking static target drivers/librte_bus_vdev.a 00:05:50.120 [199/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:05:50.120 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:50.120 [201/265] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:50.120 [202/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:50.120 [203/265] Linking static target drivers/librte_mempool_ring.a 00:05:50.120 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:50.120 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:50.120 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:50.120 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:50.120 [208/265] Linking static target drivers/librte_bus_pci.a 00:05:50.379 [209/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:50.379 [210/265] Linking static target lib/librte_cryptodev.a 00:05:50.379 [211/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [212/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [214/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.379 [218/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.638 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:50.638 [220/265] Linking static target lib/librte_ethdev.a 00:05:50.638 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.638 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:50.638 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.897 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.837 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:51.837 [226/265] Linking static target lib/librte_vhost.a 00:05:52.097 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.475 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.668 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.604 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.604 [231/265] Linking target lib/librte_eal.so.24.0 00:05:58.865 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:05:58.865 [233/265] Linking target lib/librte_ring.so.24.0 00:05:58.865 [234/265] Linking target lib/librte_meter.so.24.0 00:05:58.865 [235/265] Linking target lib/librte_timer.so.24.0 00:05:58.865 [236/265] Linking target lib/librte_pci.so.24.0 00:05:58.865 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:05:58.865 [238/265] Linking target lib/librte_dmadev.so.24.0 00:05:58.865 [239/265] Generating 
symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:05:58.865 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:58.865 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:58.865 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:05:58.865 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:59.124 [244/265] Linking target lib/librte_mempool.so.24.0 00:05:59.124 [245/265] Linking target lib/librte_rcu.so.24.0 00:05:59.124 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:05:59.124 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:59.124 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:59.124 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:05:59.124 [250/265] Linking target lib/librte_mbuf.so.24.0 00:05:59.383 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:59.383 [252/265] Linking target lib/librte_net.so.24.0 00:05:59.383 [253/265] Linking target lib/librte_reorder.so.24.0 00:05:59.383 [254/265] Linking target lib/librte_compressdev.so.24.0 00:05:59.383 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:05:59.383 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:59.383 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:59.641 [258/265] Linking target lib/librte_hash.so.24.0 00:05:59.641 [259/265] Linking target lib/librte_cmdline.so.24.0 00:05:59.641 [260/265] Linking target lib/librte_ethdev.so.24.0 00:05:59.641 [261/265] Linking target lib/librte_security.so.24.0 00:05:59.641 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:05:59.641 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:05:59.641 [264/265] Linking target lib/librte_power.so.24.0 00:05:59.641 [265/265] Linking target lib/librte_vhost.so.24.0 00:05:59.900 INFO: autodetecting backend as ninja 00:05:59.900 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:06:00.469 CC lib/ut_mock/mock.o 00:06:00.469 CC lib/log/log.o 00:06:00.469 CC lib/log/log_flags.o 00:06:00.469 CC lib/log/log_deprecated.o 00:06:00.469 CC lib/ut/ut.o 00:06:00.728 LIB libspdk_ut_mock.a 00:06:00.728 SO libspdk_ut_mock.so.5.0 00:06:00.728 LIB libspdk_log.a 00:06:00.728 LIB libspdk_ut.a 00:06:00.728 SO libspdk_log.so.6.1 00:06:00.728 SYMLINK libspdk_ut_mock.so 00:06:00.728 SO libspdk_ut.so.1.0 00:06:00.728 SYMLINK libspdk_log.so 00:06:00.728 SYMLINK libspdk_ut.so 00:06:00.988 CXX lib/trace_parser/trace.o 00:06:00.988 CC lib/dma/dma.o 00:06:00.988 CC lib/util/base64.o 00:06:00.988 CC lib/util/bit_array.o 00:06:00.988 CC lib/util/cpuset.o 00:06:00.988 CC lib/util/crc16.o 00:06:00.988 CC lib/util/crc32.o 00:06:00.988 CC lib/util/crc32c.o 00:06:00.988 CC lib/util/dif.o 00:06:00.988 CC lib/util/crc32_ieee.o 00:06:00.988 CC lib/util/crc64.o 00:06:00.988 CC lib/util/fd.o 00:06:00.988 CC lib/ioat/ioat.o 00:06:00.988 CC lib/util/iov.o 00:06:00.988 CC lib/util/file.o 00:06:00.988 CC lib/util/hexlify.o 00:06:00.988 CC lib/util/math.o 00:06:00.988 CC lib/util/pipe.o 00:06:00.988 CC lib/util/strerror_tls.o 00:06:00.988 CC lib/util/string.o 00:06:00.988 CC 
lib/util/uuid.o 00:06:00.988 CC lib/util/fd_group.o 00:06:00.988 CC lib/util/xor.o 00:06:00.988 CC lib/util/zipf.o 00:06:00.988 CC lib/vfio_user/host/vfio_user_pci.o 00:06:00.988 CC lib/vfio_user/host/vfio_user.o 00:06:01.247 LIB libspdk_dma.a 00:06:01.247 SO libspdk_dma.so.3.0 00:06:01.247 SYMLINK libspdk_dma.so 00:06:01.247 LIB libspdk_ioat.a 00:06:01.247 SO libspdk_ioat.so.6.0 00:06:01.247 LIB libspdk_vfio_user.a 00:06:01.247 SO libspdk_vfio_user.so.4.0 00:06:01.247 SYMLINK libspdk_ioat.so 00:06:01.505 SYMLINK libspdk_vfio_user.so 00:06:01.505 LIB libspdk_util.a 00:06:01.505 SO libspdk_util.so.8.0 00:06:01.505 SYMLINK libspdk_util.so 00:06:01.763 LIB libspdk_trace_parser.a 00:06:01.763 SO libspdk_trace_parser.so.4.0 00:06:01.763 SYMLINK libspdk_trace_parser.so 00:06:01.763 CC lib/rdma/common.o 00:06:01.763 CC lib/rdma/rdma_verbs.o 00:06:01.763 CC lib/json/json_parse.o 00:06:01.763 CC lib/json/json_util.o 00:06:01.763 CC lib/json/json_write.o 00:06:01.763 CC lib/idxd/idxd.o 00:06:01.763 CC lib/idxd/idxd_user.o 00:06:01.763 CC lib/vmd/vmd.o 00:06:01.763 CC lib/conf/conf.o 00:06:01.763 CC lib/env_dpdk/env.o 00:06:01.763 CC lib/env_dpdk/memory.o 00:06:01.763 CC lib/vmd/led.o 00:06:01.763 CC lib/env_dpdk/pci.o 00:06:01.763 CC lib/env_dpdk/init.o 00:06:01.763 CC lib/env_dpdk/pci_virtio.o 00:06:01.763 CC lib/env_dpdk/threads.o 00:06:01.763 CC lib/env_dpdk/pci_ioat.o 00:06:01.763 CC lib/env_dpdk/pci_vmd.o 00:06:01.763 CC lib/env_dpdk/pci_idxd.o 00:06:01.763 CC lib/env_dpdk/pci_event.o 00:06:01.763 CC lib/env_dpdk/sigbus_handler.o 00:06:01.763 CC lib/env_dpdk/pci_dpdk.o 00:06:01.763 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:01.763 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:02.022 LIB libspdk_conf.a 00:06:02.022 LIB libspdk_rdma.a 00:06:02.022 LIB libspdk_json.a 00:06:02.022 SO libspdk_conf.so.5.0 00:06:02.022 SO libspdk_rdma.so.5.0 00:06:02.022 SO libspdk_json.so.5.1 00:06:02.022 SYMLINK libspdk_conf.so 00:06:02.022 SYMLINK libspdk_rdma.so 00:06:02.281 SYMLINK libspdk_json.so 00:06:02.281 LIB libspdk_idxd.a 00:06:02.281 SO libspdk_idxd.so.11.0 00:06:02.281 LIB libspdk_vmd.a 00:06:02.281 SYMLINK libspdk_idxd.so 00:06:02.281 SO libspdk_vmd.so.5.0 00:06:02.281 CC lib/jsonrpc/jsonrpc_server.o 00:06:02.281 CC lib/jsonrpc/jsonrpc_client.o 00:06:02.281 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:02.281 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:02.281 SYMLINK libspdk_vmd.so 00:06:02.540 LIB libspdk_jsonrpc.a 00:06:02.540 SO libspdk_jsonrpc.so.5.1 00:06:02.540 SYMLINK libspdk_jsonrpc.so 00:06:02.799 LIB libspdk_env_dpdk.a 00:06:02.799 CC lib/rpc/rpc.o 00:06:02.799 SO libspdk_env_dpdk.so.13.0 00:06:03.059 SYMLINK libspdk_env_dpdk.so 00:06:03.059 LIB libspdk_rpc.a 00:06:03.059 SO libspdk_rpc.so.5.0 00:06:03.059 SYMLINK libspdk_rpc.so 00:06:03.318 CC lib/trace/trace_flags.o 00:06:03.318 CC lib/trace/trace.o 00:06:03.318 CC lib/trace/trace_rpc.o 00:06:03.318 CC lib/notify/notify.o 00:06:03.318 CC lib/notify/notify_rpc.o 00:06:03.318 CC lib/sock/sock.o 00:06:03.318 CC lib/sock/sock_rpc.o 00:06:03.318 LIB libspdk_notify.a 00:06:03.318 SO libspdk_notify.so.5.0 00:06:03.576 LIB libspdk_trace.a 00:06:03.576 SYMLINK libspdk_notify.so 00:06:03.576 SO libspdk_trace.so.9.0 00:06:03.576 SYMLINK libspdk_trace.so 00:06:03.576 LIB libspdk_sock.a 00:06:03.576 SO libspdk_sock.so.8.0 00:06:03.576 SYMLINK libspdk_sock.so 00:06:03.835 CC lib/thread/thread.o 00:06:03.835 CC lib/thread/iobuf.o 00:06:03.835 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:03.835 CC lib/nvme/nvme_ctrlr.o 00:06:03.835 CC lib/nvme/nvme_fabric.o 00:06:03.835 CC 
lib/nvme/nvme_ns_cmd.o 00:06:03.835 CC lib/nvme/nvme_ns.o 00:06:03.835 CC lib/nvme/nvme_pcie_common.o 00:06:03.835 CC lib/nvme/nvme_pcie.o 00:06:03.835 CC lib/nvme/nvme_quirks.o 00:06:03.835 CC lib/nvme/nvme_qpair.o 00:06:03.835 CC lib/nvme/nvme.o 00:06:03.835 CC lib/nvme/nvme_transport.o 00:06:03.835 CC lib/nvme/nvme_discovery.o 00:06:03.835 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:03.835 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:03.835 CC lib/nvme/nvme_tcp.o 00:06:03.835 CC lib/nvme/nvme_opal.o 00:06:03.835 CC lib/nvme/nvme_poll_group.o 00:06:03.835 CC lib/nvme/nvme_io_msg.o 00:06:03.835 CC lib/nvme/nvme_cuse.o 00:06:03.835 CC lib/nvme/nvme_zns.o 00:06:03.835 CC lib/nvme/nvme_vfio_user.o 00:06:03.835 CC lib/nvme/nvme_rdma.o 00:06:04.832 LIB libspdk_thread.a 00:06:04.832 SO libspdk_thread.so.9.0 00:06:04.832 SYMLINK libspdk_thread.so 00:06:05.091 CC lib/accel/accel.o 00:06:05.091 CC lib/init/json_config.o 00:06:05.091 CC lib/init/subsystem.o 00:06:05.091 CC lib/accel/accel_rpc.o 00:06:05.091 CC lib/init/subsystem_rpc.o 00:06:05.091 CC lib/init/rpc.o 00:06:05.091 CC lib/accel/accel_sw.o 00:06:05.091 CC lib/blob/blobstore.o 00:06:05.091 CC lib/blob/request.o 00:06:05.091 CC lib/blob/zeroes.o 00:06:05.091 CC lib/blob/blob_bs_dev.o 00:06:05.091 CC lib/virtio/virtio_vhost_user.o 00:06:05.091 CC lib/virtio/virtio.o 00:06:05.091 CC lib/virtio/virtio_vfio_user.o 00:06:05.091 CC lib/virtio/virtio_pci.o 00:06:05.351 LIB libspdk_init.a 00:06:05.351 SO libspdk_init.so.4.0 00:06:05.351 LIB libspdk_nvme.a 00:06:05.351 LIB libspdk_virtio.a 00:06:05.351 SYMLINK libspdk_init.so 00:06:05.351 SO libspdk_virtio.so.6.0 00:06:05.351 SO libspdk_nvme.so.12.0 00:06:05.610 SYMLINK libspdk_virtio.so 00:06:05.610 CC lib/event/app.o 00:06:05.610 CC lib/event/reactor.o 00:06:05.610 CC lib/event/log_rpc.o 00:06:05.610 CC lib/event/app_rpc.o 00:06:05.610 CC lib/event/scheduler_static.o 00:06:05.610 SYMLINK libspdk_nvme.so 00:06:05.869 LIB libspdk_accel.a 00:06:05.869 SO libspdk_accel.so.14.0 00:06:05.869 LIB libspdk_event.a 00:06:05.869 SYMLINK libspdk_accel.so 00:06:05.869 SO libspdk_event.so.12.0 00:06:06.128 SYMLINK libspdk_event.so 00:06:06.128 CC lib/bdev/bdev.o 00:06:06.128 CC lib/bdev/bdev_rpc.o 00:06:06.128 CC lib/bdev/part.o 00:06:06.128 CC lib/bdev/bdev_zone.o 00:06:06.128 CC lib/bdev/scsi_nvme.o 00:06:07.067 LIB libspdk_blob.a 00:06:07.067 SO libspdk_blob.so.10.1 00:06:07.067 SYMLINK libspdk_blob.so 00:06:07.326 CC lib/lvol/lvol.o 00:06:07.326 CC lib/blobfs/blobfs.o 00:06:07.326 CC lib/blobfs/tree.o 00:06:07.895 LIB libspdk_blobfs.a 00:06:07.895 LIB libspdk_bdev.a 00:06:07.895 SO libspdk_blobfs.so.9.0 00:06:07.895 SO libspdk_bdev.so.14.0 00:06:07.896 LIB libspdk_lvol.a 00:06:07.896 SO libspdk_lvol.so.9.1 00:06:07.896 SYMLINK libspdk_blobfs.so 00:06:07.896 SYMLINK libspdk_bdev.so 00:06:07.896 SYMLINK libspdk_lvol.so 00:06:08.156 CC lib/ublk/ublk.o 00:06:08.156 CC lib/ublk/ublk_rpc.o 00:06:08.156 CC lib/nbd/nbd.o 00:06:08.156 CC lib/nbd/nbd_rpc.o 00:06:08.156 CC lib/ftl/ftl_core.o 00:06:08.156 CC lib/ftl/ftl_init.o 00:06:08.156 CC lib/ftl/ftl_layout.o 00:06:08.156 CC lib/ftl/ftl_debug.o 00:06:08.156 CC lib/ftl/ftl_io.o 00:06:08.156 CC lib/ftl/ftl_sb.o 00:06:08.156 CC lib/ftl/ftl_l2p.o 00:06:08.156 CC lib/ftl/ftl_nv_cache.o 00:06:08.156 CC lib/ftl/ftl_l2p_flat.o 00:06:08.156 CC lib/ftl/ftl_writer.o 00:06:08.156 CC lib/ftl/ftl_band.o 00:06:08.156 CC lib/ftl/ftl_band_ops.o 00:06:08.156 CC lib/nvmf/ctrlr_discovery.o 00:06:08.156 CC lib/nvmf/ctrlr.o 00:06:08.156 CC lib/ftl/ftl_rq.o 00:06:08.156 CC lib/ftl/ftl_reloc.o 
00:06:08.156 CC lib/ftl/ftl_l2p_cache.o 00:06:08.156 CC lib/nvmf/ctrlr_bdev.o 00:06:08.156 CC lib/ftl/ftl_p2l.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt.o 00:06:08.156 CC lib/nvmf/subsystem.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:08.156 CC lib/nvmf/nvmf.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:08.156 CC lib/scsi/lun.o 00:06:08.156 CC lib/nvmf/nvmf_rpc.o 00:06:08.156 CC lib/nvmf/transport.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:08.156 CC lib/scsi/dev.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:08.156 CC lib/nvmf/tcp.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:08.156 CC lib/scsi/port.o 00:06:08.156 CC lib/nvmf/rdma.o 00:06:08.156 CC lib/scsi/scsi_bdev.o 00:06:08.156 CC lib/scsi/scsi.o 00:06:08.156 CC lib/scsi/scsi_pr.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:08.156 CC lib/scsi/task.o 00:06:08.156 CC lib/scsi/scsi_rpc.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:08.156 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:08.156 CC lib/ftl/utils/ftl_conf.o 00:06:08.156 CC lib/ftl/utils/ftl_mempool.o 00:06:08.156 CC lib/ftl/utils/ftl_md.o 00:06:08.156 CC lib/ftl/utils/ftl_bitmap.o 00:06:08.156 CC lib/ftl/utils/ftl_property.o 00:06:08.156 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:08.156 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:08.156 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:08.156 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:08.156 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:08.156 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:08.156 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:08.156 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:08.156 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:08.156 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:08.156 CC lib/ftl/base/ftl_base_dev.o 00:06:08.156 CC lib/ftl/base/ftl_base_bdev.o 00:06:08.156 CC lib/ftl/ftl_trace.o 00:06:08.723 LIB libspdk_nbd.a 00:06:08.723 SO libspdk_nbd.so.6.0 00:06:08.723 LIB libspdk_scsi.a 00:06:08.723 SYMLINK libspdk_nbd.so 00:06:08.723 SO libspdk_scsi.so.8.0 00:06:08.723 SYMLINK libspdk_scsi.so 00:06:08.723 LIB libspdk_ublk.a 00:06:08.982 SO libspdk_ublk.so.2.0 00:06:08.982 SYMLINK libspdk_ublk.so 00:06:08.982 LIB libspdk_ftl.a 00:06:08.982 CC lib/iscsi/conn.o 00:06:08.982 CC lib/iscsi/init_grp.o 00:06:08.982 CC lib/iscsi/iscsi.o 00:06:08.982 CC lib/iscsi/md5.o 00:06:08.982 CC lib/iscsi/param.o 00:06:08.982 CC lib/iscsi/iscsi_subsystem.o 00:06:08.982 CC lib/iscsi/portal_grp.o 00:06:08.982 CC lib/iscsi/tgt_node.o 00:06:08.982 CC lib/iscsi/iscsi_rpc.o 00:06:08.982 CC lib/iscsi/task.o 00:06:08.982 CC lib/vhost/vhost.o 00:06:08.982 CC lib/vhost/vhost_blk.o 00:06:08.982 CC lib/vhost/vhost_rpc.o 00:06:08.982 CC lib/vhost/vhost_scsi.o 00:06:08.982 CC lib/vhost/rte_vhost_user.o 00:06:08.982 SO libspdk_ftl.so.8.0 00:06:09.250 SYMLINK libspdk_ftl.so 00:06:09.820 LIB libspdk_vhost.a 00:06:09.820 LIB libspdk_nvmf.a 00:06:09.820 SO libspdk_vhost.so.7.1 00:06:09.820 SO libspdk_nvmf.so.17.0 00:06:09.820 SYMLINK libspdk_vhost.so 00:06:09.820 LIB libspdk_iscsi.a 00:06:10.080 SO libspdk_iscsi.so.7.0 00:06:10.080 SYMLINK libspdk_nvmf.so 00:06:10.080 SYMLINK libspdk_iscsi.so 00:06:10.339 CC module/env_dpdk/env_dpdk_rpc.o 00:06:10.597 CC module/accel/error/accel_error.o 00:06:10.597 CC module/accel/error/accel_error_rpc.o 00:06:10.597 CC module/accel/dsa/accel_dsa.o 00:06:10.597 CC module/accel/dsa/accel_dsa_rpc.o 00:06:10.597 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:06:10.597 CC module/scheduler/gscheduler/gscheduler.o 00:06:10.597 CC module/accel/ioat/accel_ioat.o 00:06:10.597 CC module/accel/ioat/accel_ioat_rpc.o 00:06:10.597 CC module/accel/iaa/accel_iaa.o 00:06:10.597 CC module/accel/iaa/accel_iaa_rpc.o 00:06:10.597 CC module/blob/bdev/blob_bdev.o 00:06:10.597 CC module/sock/posix/posix.o 00:06:10.597 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:10.597 LIB libspdk_env_dpdk_rpc.a 00:06:10.597 SO libspdk_env_dpdk_rpc.so.5.0 00:06:10.597 SYMLINK libspdk_env_dpdk_rpc.so 00:06:10.597 LIB libspdk_scheduler_gscheduler.a 00:06:10.597 LIB libspdk_accel_error.a 00:06:10.597 LIB libspdk_scheduler_dpdk_governor.a 00:06:10.597 SO libspdk_accel_error.so.1.0 00:06:10.597 LIB libspdk_accel_ioat.a 00:06:10.597 SO libspdk_scheduler_gscheduler.so.3.0 00:06:10.597 SO libspdk_scheduler_dpdk_governor.so.3.0 00:06:10.597 LIB libspdk_accel_dsa.a 00:06:10.597 LIB libspdk_scheduler_dynamic.a 00:06:10.597 LIB libspdk_accel_iaa.a 00:06:10.597 SO libspdk_accel_ioat.so.5.0 00:06:10.857 SYMLINK libspdk_accel_error.so 00:06:10.857 SYMLINK libspdk_scheduler_gscheduler.so 00:06:10.857 SO libspdk_scheduler_dynamic.so.3.0 00:06:10.857 SO libspdk_accel_dsa.so.4.0 00:06:10.857 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:10.857 SO libspdk_accel_iaa.so.2.0 00:06:10.857 LIB libspdk_blob_bdev.a 00:06:10.857 SYMLINK libspdk_accel_ioat.so 00:06:10.857 SYMLINK libspdk_scheduler_dynamic.so 00:06:10.857 SO libspdk_blob_bdev.so.10.1 00:06:10.857 SYMLINK libspdk_accel_dsa.so 00:06:10.857 SYMLINK libspdk_accel_iaa.so 00:06:10.857 SYMLINK libspdk_blob_bdev.so 00:06:11.116 LIB libspdk_sock_posix.a 00:06:11.116 CC module/bdev/error/vbdev_error.o 00:06:11.116 CC module/bdev/error/vbdev_error_rpc.o 00:06:11.116 SO libspdk_sock_posix.so.5.0 00:06:11.116 CC module/bdev/delay/vbdev_delay.o 00:06:11.116 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:11.116 CC module/bdev/split/vbdev_split_rpc.o 00:06:11.116 CC module/blobfs/bdev/blobfs_bdev.o 00:06:11.116 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:11.116 CC module/bdev/split/vbdev_split.o 00:06:11.116 CC module/bdev/nvme/bdev_nvme.o 00:06:11.116 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:11.116 CC module/bdev/raid/bdev_raid.o 00:06:11.116 CC module/bdev/nvme/nvme_rpc.o 00:06:11.116 CC module/bdev/iscsi/bdev_iscsi.o 00:06:11.116 CC module/bdev/nvme/bdev_mdns_client.o 00:06:11.116 CC module/bdev/gpt/gpt.o 00:06:11.116 CC module/bdev/raid/bdev_raid_rpc.o 00:06:11.116 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:11.116 CC module/bdev/raid/bdev_raid_sb.o 00:06:11.116 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:11.116 CC module/bdev/nvme/vbdev_opal.o 00:06:11.116 CC module/bdev/gpt/vbdev_gpt.o 00:06:11.116 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:11.116 CC module/bdev/raid/raid0.o 00:06:11.116 CC module/bdev/lvol/vbdev_lvol.o 00:06:11.116 CC module/bdev/raid/raid1.o 00:06:11.116 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:11.116 CC module/bdev/raid/concat.o 00:06:11.116 CC module/bdev/null/bdev_null_rpc.o 00:06:11.116 CC module/bdev/null/bdev_null.o 00:06:11.116 CC module/bdev/aio/bdev_aio.o 00:06:11.116 CC module/bdev/passthru/vbdev_passthru.o 00:06:11.116 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:11.116 CC module/bdev/aio/bdev_aio_rpc.o 00:06:11.116 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:11.116 CC module/bdev/malloc/bdev_malloc.o 00:06:11.116 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:11.116 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:11.116 CC 
module/bdev/ftl/bdev_ftl.o 00:06:11.116 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:11.116 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:11.116 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:11.116 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:11.116 SYMLINK libspdk_sock_posix.so 00:06:11.375 LIB libspdk_blobfs_bdev.a 00:06:11.375 LIB libspdk_bdev_split.a 00:06:11.375 SO libspdk_bdev_split.so.5.0 00:06:11.375 LIB libspdk_bdev_error.a 00:06:11.375 SO libspdk_blobfs_bdev.so.5.0 00:06:11.375 LIB libspdk_bdev_null.a 00:06:11.375 LIB libspdk_bdev_gpt.a 00:06:11.375 LIB libspdk_bdev_ftl.a 00:06:11.375 SO libspdk_bdev_error.so.5.0 00:06:11.375 LIB libspdk_bdev_passthru.a 00:06:11.375 SYMLINK libspdk_bdev_split.so 00:06:11.375 SO libspdk_bdev_null.so.5.0 00:06:11.375 SYMLINK libspdk_blobfs_bdev.so 00:06:11.375 LIB libspdk_bdev_aio.a 00:06:11.375 SO libspdk_bdev_gpt.so.5.0 00:06:11.375 SO libspdk_bdev_ftl.so.5.0 00:06:11.375 LIB libspdk_bdev_delay.a 00:06:11.375 LIB libspdk_bdev_zone_block.a 00:06:11.375 SO libspdk_bdev_passthru.so.5.0 00:06:11.375 SO libspdk_bdev_aio.so.5.0 00:06:11.375 SYMLINK libspdk_bdev_error.so 00:06:11.375 SO libspdk_bdev_delay.so.5.0 00:06:11.375 LIB libspdk_bdev_malloc.a 00:06:11.375 SO libspdk_bdev_zone_block.so.5.0 00:06:11.634 LIB libspdk_bdev_iscsi.a 00:06:11.634 SYMLINK libspdk_bdev_ftl.so 00:06:11.634 SYMLINK libspdk_bdev_null.so 00:06:11.634 SYMLINK libspdk_bdev_gpt.so 00:06:11.634 SO libspdk_bdev_iscsi.so.5.0 00:06:11.634 SO libspdk_bdev_malloc.so.5.0 00:06:11.634 SYMLINK libspdk_bdev_passthru.so 00:06:11.634 SYMLINK libspdk_bdev_delay.so 00:06:11.634 SYMLINK libspdk_bdev_aio.so 00:06:11.634 SYMLINK libspdk_bdev_zone_block.so 00:06:11.634 LIB libspdk_bdev_lvol.a 00:06:11.634 SYMLINK libspdk_bdev_iscsi.so 00:06:11.634 SYMLINK libspdk_bdev_malloc.so 00:06:11.634 LIB libspdk_bdev_virtio.a 00:06:11.634 SO libspdk_bdev_lvol.so.5.0 00:06:11.634 SO libspdk_bdev_virtio.so.5.0 00:06:11.634 SYMLINK libspdk_bdev_lvol.so 00:06:11.634 SYMLINK libspdk_bdev_virtio.so 00:06:11.893 LIB libspdk_bdev_raid.a 00:06:11.893 SO libspdk_bdev_raid.so.5.0 00:06:11.893 SYMLINK libspdk_bdev_raid.so 00:06:12.831 LIB libspdk_bdev_nvme.a 00:06:12.831 SO libspdk_bdev_nvme.so.6.0 00:06:12.831 SYMLINK libspdk_bdev_nvme.so 00:06:13.091 CC module/event/subsystems/iobuf/iobuf.o 00:06:13.091 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:13.091 CC module/event/subsystems/scheduler/scheduler.o 00:06:13.091 CC module/event/subsystems/vmd/vmd.o 00:06:13.091 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:13.351 CC module/event/subsystems/sock/sock.o 00:06:13.351 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:13.351 LIB libspdk_event_vmd.a 00:06:13.351 LIB libspdk_event_scheduler.a 00:06:13.351 LIB libspdk_event_iobuf.a 00:06:13.351 LIB libspdk_event_sock.a 00:06:13.351 SO libspdk_event_vmd.so.5.0 00:06:13.351 LIB libspdk_event_vhost_blk.a 00:06:13.351 SO libspdk_event_scheduler.so.3.0 00:06:13.351 SO libspdk_event_iobuf.so.2.0 00:06:13.351 SO libspdk_event_sock.so.4.0 00:06:13.351 SO libspdk_event_vhost_blk.so.2.0 00:06:13.351 SYMLINK libspdk_event_scheduler.so 00:06:13.351 SYMLINK libspdk_event_vmd.so 00:06:13.351 SYMLINK libspdk_event_iobuf.so 00:06:13.351 SYMLINK libspdk_event_sock.so 00:06:13.351 SYMLINK libspdk_event_vhost_blk.so 00:06:13.611 CC module/event/subsystems/accel/accel.o 00:06:13.870 LIB libspdk_event_accel.a 00:06:13.870 SO libspdk_event_accel.so.5.0 00:06:13.870 SYMLINK libspdk_event_accel.so 00:06:14.129 CC module/event/subsystems/bdev/bdev.o 00:06:14.129 LIB libspdk_event_bdev.a 
00:06:14.129 SO libspdk_event_bdev.so.5.0 00:06:14.129 SYMLINK libspdk_event_bdev.so 00:06:14.388 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:14.388 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:14.388 CC module/event/subsystems/ublk/ublk.o 00:06:14.388 CC module/event/subsystems/nbd/nbd.o 00:06:14.388 CC module/event/subsystems/scsi/scsi.o 00:06:14.647 LIB libspdk_event_ublk.a 00:06:14.647 SO libspdk_event_ublk.so.2.0 00:06:14.647 LIB libspdk_event_nbd.a 00:06:14.647 LIB libspdk_event_scsi.a 00:06:14.647 LIB libspdk_event_nvmf.a 00:06:14.647 SO libspdk_event_nbd.so.5.0 00:06:14.647 SO libspdk_event_nvmf.so.5.0 00:06:14.647 SYMLINK libspdk_event_ublk.so 00:06:14.647 SO libspdk_event_scsi.so.5.0 00:06:14.647 SYMLINK libspdk_event_nvmf.so 00:06:14.647 SYMLINK libspdk_event_nbd.so 00:06:14.647 SYMLINK libspdk_event_scsi.so 00:06:14.907 CC module/event/subsystems/iscsi/iscsi.o 00:06:14.907 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:14.907 LIB libspdk_event_vhost_scsi.a 00:06:14.907 LIB libspdk_event_iscsi.a 00:06:15.166 SO libspdk_event_vhost_scsi.so.2.0 00:06:15.166 SO libspdk_event_iscsi.so.5.0 00:06:15.166 SYMLINK libspdk_event_vhost_scsi.so 00:06:15.166 SYMLINK libspdk_event_iscsi.so 00:06:15.166 SO libspdk.so.5.0 00:06:15.166 SYMLINK libspdk.so 00:06:15.436 CC test/rpc_client/rpc_client_test.o 00:06:15.436 CC app/spdk_nvme_perf/perf.o 00:06:15.436 CXX app/trace/trace.o 00:06:15.436 CC app/spdk_lspci/spdk_lspci.o 00:06:15.436 CC app/trace_record/trace_record.o 00:06:15.437 CC app/spdk_nvme_discover/discovery_aer.o 00:06:15.437 CC app/spdk_nvme_identify/identify.o 00:06:15.437 CC app/spdk_top/spdk_top.o 00:06:15.437 TEST_HEADER include/spdk/accel.h 00:06:15.437 TEST_HEADER include/spdk/accel_module.h 00:06:15.437 TEST_HEADER include/spdk/assert.h 00:06:15.437 TEST_HEADER include/spdk/barrier.h 00:06:15.437 TEST_HEADER include/spdk/bdev.h 00:06:15.437 TEST_HEADER include/spdk/base64.h 00:06:15.437 TEST_HEADER include/spdk/bdev_module.h 00:06:15.437 TEST_HEADER include/spdk/bdev_zone.h 00:06:15.437 TEST_HEADER include/spdk/bit_array.h 00:06:15.437 TEST_HEADER include/spdk/bit_pool.h 00:06:15.437 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:15.437 TEST_HEADER include/spdk/blobfs.h 00:06:15.437 TEST_HEADER include/spdk/blob_bdev.h 00:06:15.437 TEST_HEADER include/spdk/blob.h 00:06:15.437 TEST_HEADER include/spdk/config.h 00:06:15.437 CC app/nvmf_tgt/nvmf_main.o 00:06:15.437 TEST_HEADER include/spdk/conf.h 00:06:15.437 TEST_HEADER include/spdk/cpuset.h 00:06:15.437 TEST_HEADER include/spdk/crc16.h 00:06:15.437 TEST_HEADER include/spdk/crc32.h 00:06:15.437 TEST_HEADER include/spdk/crc64.h 00:06:15.437 TEST_HEADER include/spdk/dma.h 00:06:15.437 TEST_HEADER include/spdk/dif.h 00:06:15.437 TEST_HEADER include/spdk/env_dpdk.h 00:06:15.437 TEST_HEADER include/spdk/endian.h 00:06:15.437 TEST_HEADER include/spdk/event.h 00:06:15.437 TEST_HEADER include/spdk/env.h 00:06:15.437 TEST_HEADER include/spdk/fd.h 00:06:15.437 TEST_HEADER include/spdk/fd_group.h 00:06:15.437 TEST_HEADER include/spdk/file.h 00:06:15.437 CC app/spdk_dd/spdk_dd.o 00:06:15.437 TEST_HEADER include/spdk/ftl.h 00:06:15.437 TEST_HEADER include/spdk/hexlify.h 00:06:15.437 TEST_HEADER include/spdk/gpt_spec.h 00:06:15.437 TEST_HEADER include/spdk/histogram_data.h 00:06:15.437 TEST_HEADER include/spdk/idxd.h 00:06:15.437 TEST_HEADER include/spdk/init.h 00:06:15.437 TEST_HEADER include/spdk/idxd_spec.h 00:06:15.437 TEST_HEADER include/spdk/ioat.h 00:06:15.437 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:15.437 
TEST_HEADER include/spdk/ioat_spec.h 00:06:15.437 CC app/iscsi_tgt/iscsi_tgt.o 00:06:15.437 TEST_HEADER include/spdk/iscsi_spec.h 00:06:15.437 TEST_HEADER include/spdk/json.h 00:06:15.437 TEST_HEADER include/spdk/jsonrpc.h 00:06:15.437 TEST_HEADER include/spdk/likely.h 00:06:15.437 TEST_HEADER include/spdk/log.h 00:06:15.437 TEST_HEADER include/spdk/memory.h 00:06:15.437 TEST_HEADER include/spdk/lvol.h 00:06:15.437 CC app/vhost/vhost.o 00:06:15.437 TEST_HEADER include/spdk/mmio.h 00:06:15.437 TEST_HEADER include/spdk/nbd.h 00:06:15.437 TEST_HEADER include/spdk/nvme.h 00:06:15.437 CC app/spdk_tgt/spdk_tgt.o 00:06:15.437 TEST_HEADER include/spdk/nvme_intel.h 00:06:15.437 TEST_HEADER include/spdk/notify.h 00:06:15.437 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:15.437 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:15.437 TEST_HEADER include/spdk/nvme_zns.h 00:06:15.437 TEST_HEADER include/spdk/nvme_spec.h 00:06:15.437 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:15.437 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:15.437 TEST_HEADER include/spdk/nvmf.h 00:06:15.437 TEST_HEADER include/spdk/nvmf_transport.h 00:06:15.437 TEST_HEADER include/spdk/nvmf_spec.h 00:06:15.437 TEST_HEADER include/spdk/opal_spec.h 00:06:15.437 TEST_HEADER include/spdk/opal.h 00:06:15.437 TEST_HEADER include/spdk/pci_ids.h 00:06:15.437 TEST_HEADER include/spdk/queue.h 00:06:15.437 TEST_HEADER include/spdk/pipe.h 00:06:15.437 TEST_HEADER include/spdk/reduce.h 00:06:15.437 TEST_HEADER include/spdk/rpc.h 00:06:15.437 TEST_HEADER include/spdk/scheduler.h 00:06:15.437 TEST_HEADER include/spdk/scsi.h 00:06:15.437 TEST_HEADER include/spdk/sock.h 00:06:15.437 TEST_HEADER include/spdk/scsi_spec.h 00:06:15.437 TEST_HEADER include/spdk/stdinc.h 00:06:15.437 TEST_HEADER include/spdk/thread.h 00:06:15.437 TEST_HEADER include/spdk/string.h 00:06:15.437 TEST_HEADER include/spdk/trace.h 00:06:15.437 TEST_HEADER include/spdk/tree.h 00:06:15.437 TEST_HEADER include/spdk/trace_parser.h 00:06:15.437 TEST_HEADER include/spdk/ublk.h 00:06:15.437 TEST_HEADER include/spdk/uuid.h 00:06:15.437 TEST_HEADER include/spdk/version.h 00:06:15.437 CC test/event/reactor/reactor.o 00:06:15.437 TEST_HEADER include/spdk/util.h 00:06:15.437 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:15.437 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:15.437 TEST_HEADER include/spdk/vmd.h 00:06:15.437 TEST_HEADER include/spdk/vhost.h 00:06:15.437 TEST_HEADER include/spdk/zipf.h 00:06:15.437 TEST_HEADER include/spdk/xor.h 00:06:15.437 CXX test/cpp_headers/accel.o 00:06:15.707 CXX test/cpp_headers/accel_module.o 00:06:15.707 CXX test/cpp_headers/barrier.o 00:06:15.707 CXX test/cpp_headers/base64.o 00:06:15.707 CXX test/cpp_headers/assert.o 00:06:15.707 CC test/nvme/aer/aer.o 00:06:15.707 CC test/nvme/err_injection/err_injection.o 00:06:15.707 CXX test/cpp_headers/bdev_zone.o 00:06:15.707 CC test/nvme/reset/reset.o 00:06:15.707 CXX test/cpp_headers/bdev.o 00:06:15.707 CC test/event/reactor_perf/reactor_perf.o 00:06:15.707 CXX test/cpp_headers/bit_pool.o 00:06:15.707 CXX test/cpp_headers/bdev_module.o 00:06:15.707 CC test/nvme/cuse/cuse.o 00:06:15.707 CXX test/cpp_headers/bit_array.o 00:06:15.707 CXX test/cpp_headers/blobfs_bdev.o 00:06:15.707 CC test/nvme/fused_ordering/fused_ordering.o 00:06:15.707 CXX test/cpp_headers/blob_bdev.o 00:06:15.707 CXX test/cpp_headers/blobfs.o 00:06:15.707 CC examples/ioat/perf/perf.o 00:06:15.707 CXX test/cpp_headers/blob.o 00:06:15.707 CXX test/cpp_headers/conf.o 00:06:15.707 CXX test/cpp_headers/cpuset.o 00:06:15.707 CXX 
test/cpp_headers/config.o 00:06:15.707 CC examples/vmd/led/led.o 00:06:15.707 CC test/env/pci/pci_ut.o 00:06:15.707 CC test/nvme/sgl/sgl.o 00:06:15.707 CC test/env/memory/memory_ut.o 00:06:15.707 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:15.707 CXX test/cpp_headers/crc32.o 00:06:15.707 CC test/nvme/e2edp/nvme_dp.o 00:06:15.707 CXX test/cpp_headers/crc16.o 00:06:15.707 CC test/event/event_perf/event_perf.o 00:06:15.707 CC test/nvme/boot_partition/boot_partition.o 00:06:15.707 CC test/nvme/connect_stress/connect_stress.o 00:06:15.707 CC test/nvme/simple_copy/simple_copy.o 00:06:15.707 CC test/env/vtophys/vtophys.o 00:06:15.707 CC app/fio/nvme/fio_plugin.o 00:06:15.707 CC examples/util/zipf/zipf.o 00:06:15.707 CC test/thread/poller_perf/poller_perf.o 00:06:15.707 CC test/app/histogram_perf/histogram_perf.o 00:06:15.707 CC test/app/stub/stub.o 00:06:15.707 CC test/nvme/overhead/overhead.o 00:06:15.707 CC test/app/jsoncat/jsoncat.o 00:06:15.707 CC examples/sock/hello_world/hello_sock.o 00:06:15.707 CC test/dma/test_dma/test_dma.o 00:06:15.707 CC examples/idxd/perf/perf.o 00:06:15.707 CC examples/ioat/verify/verify.o 00:06:15.707 CC test/event/app_repeat/app_repeat.o 00:06:15.707 CC test/nvme/startup/startup.o 00:06:15.707 CC examples/nvme/abort/abort.o 00:06:15.707 CC test/nvme/reserve/reserve.o 00:06:15.707 CC examples/nvme/hello_world/hello_world.o 00:06:15.707 CC test/nvme/compliance/nvme_compliance.o 00:06:15.707 CC examples/accel/perf/accel_perf.o 00:06:15.707 CC test/nvme/fdp/fdp.o 00:06:15.707 CC test/bdev/bdevio/bdevio.o 00:06:15.707 CC examples/vmd/lsvmd/lsvmd.o 00:06:15.707 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:15.707 CC examples/nvme/hotplug/hotplug.o 00:06:15.707 CC app/fio/bdev/fio_plugin.o 00:06:15.707 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:15.707 CXX test/cpp_headers/crc64.o 00:06:15.707 CC examples/nvme/arbitration/arbitration.o 00:06:15.707 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:15.707 CC test/accel/dif/dif.o 00:06:15.707 CC examples/nvme/reconnect/reconnect.o 00:06:15.707 CC examples/thread/thread/thread_ex.o 00:06:15.707 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:15.707 CC test/event/scheduler/scheduler.o 00:06:15.707 CC examples/blob/cli/blobcli.o 00:06:15.707 CC test/blobfs/mkfs/mkfs.o 00:06:15.707 CC examples/bdev/hello_world/hello_bdev.o 00:06:15.707 CC examples/bdev/bdevperf/bdevperf.o 00:06:15.707 CC examples/nvmf/nvmf/nvmf.o 00:06:15.707 CC test/app/bdev_svc/bdev_svc.o 00:06:15.707 CC examples/blob/hello_world/hello_blob.o 00:06:15.707 CC test/lvol/esnap/esnap.o 00:06:15.707 CC test/env/mem_callbacks/mem_callbacks.o 00:06:15.707 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:15.707 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:15.707 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:15.707 LINK rpc_client_test 00:06:15.966 LINK interrupt_tgt 00:06:15.966 LINK vhost 00:06:15.967 LINK spdk_lspci 00:06:15.967 LINK reactor 00:06:15.967 LINK reactor_perf 00:06:15.967 LINK led 00:06:15.967 LINK vtophys 00:06:15.967 LINK event_perf 00:06:15.967 LINK boot_partition 00:06:15.967 LINK poller_perf 00:06:15.967 LINK jsoncat 00:06:15.967 LINK zipf 00:06:15.967 LINK err_injection 00:06:15.967 LINK nvmf_tgt 00:06:15.967 LINK fused_ordering 00:06:15.967 LINK spdk_nvme_discover 00:06:15.967 LINK stub 00:06:15.967 LINK connect_stress 00:06:15.967 LINK lsvmd 00:06:15.967 LINK ioat_perf 00:06:15.967 LINK cmb_copy 00:06:15.967 LINK reserve 00:06:15.967 LINK simple_copy 00:06:15.967 LINK spdk_trace_record 00:06:15.967 LINK verify 
00:06:15.967 LINK iscsi_tgt 00:06:15.967 LINK app_repeat 00:06:15.967 LINK hello_world 00:06:15.967 LINK sgl 00:06:15.967 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:15.967 CXX test/cpp_headers/dif.o 00:06:15.967 CXX test/cpp_headers/dma.o 00:06:15.967 LINK spdk_tgt 00:06:15.967 LINK hello_sock 00:06:15.967 CXX test/cpp_headers/env_dpdk.o 00:06:15.967 CXX test/cpp_headers/endian.o 00:06:15.967 LINK histogram_perf 00:06:15.967 LINK startup 00:06:16.229 LINK doorbell_aers 00:06:16.229 CXX test/cpp_headers/env.o 00:06:16.229 CXX test/cpp_headers/event.o 00:06:16.229 LINK env_dpdk_post_init 00:06:16.229 CXX test/cpp_headers/fd_group.o 00:06:16.229 LINK spdk_dd 00:06:16.229 CXX test/cpp_headers/fd.o 00:06:16.229 LINK pmr_persistence 00:06:16.229 LINK hello_bdev 00:06:16.229 LINK nvme_compliance 00:06:16.229 LINK bdev_svc 00:06:16.229 CXX test/cpp_headers/file.o 00:06:16.229 LINK mkfs 00:06:16.229 CXX test/cpp_headers/ftl.o 00:06:16.229 CXX test/cpp_headers/gpt_spec.o 00:06:16.229 LINK scheduler 00:06:16.229 CXX test/cpp_headers/hexlify.o 00:06:16.229 LINK nvme_dp 00:06:16.229 CXX test/cpp_headers/histogram_data.o 00:06:16.229 LINK hotplug 00:06:16.229 CXX test/cpp_headers/idxd.o 00:06:16.229 CXX test/cpp_headers/idxd_spec.o 00:06:16.229 LINK nvmf 00:06:16.229 CXX test/cpp_headers/init.o 00:06:16.229 CXX test/cpp_headers/ioat.o 00:06:16.229 CXX test/cpp_headers/ioat_spec.o 00:06:16.229 CXX test/cpp_headers/iscsi_spec.o 00:06:16.229 CXX test/cpp_headers/json.o 00:06:16.229 CXX test/cpp_headers/likely.o 00:06:16.229 CXX test/cpp_headers/log.o 00:06:16.229 CXX test/cpp_headers/jsonrpc.o 00:06:16.229 LINK hello_blob 00:06:16.229 LINK thread 00:06:16.229 CXX test/cpp_headers/lvol.o 00:06:16.229 LINK test_dma 00:06:16.229 CXX test/cpp_headers/memory.o 00:06:16.229 CXX test/cpp_headers/mmio.o 00:06:16.229 LINK reset 00:06:16.229 CXX test/cpp_headers/nbd.o 00:06:16.229 LINK overhead 00:06:16.229 CXX test/cpp_headers/notify.o 00:06:16.229 CXX test/cpp_headers/nvme.o 00:06:16.229 CXX test/cpp_headers/nvme_intel.o 00:06:16.229 CXX test/cpp_headers/nvme_ocssd.o 00:06:16.229 CXX test/cpp_headers/nvme_spec.o 00:06:16.229 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:16.229 LINK aer 00:06:16.229 CXX test/cpp_headers/nvme_zns.o 00:06:16.229 CXX test/cpp_headers/nvmf_cmd.o 00:06:16.229 LINK bdevio 00:06:16.229 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:16.229 CXX test/cpp_headers/nvmf.o 00:06:16.229 CXX test/cpp_headers/nvmf_spec.o 00:06:16.229 CXX test/cpp_headers/nvmf_transport.o 00:06:16.229 CXX test/cpp_headers/opal.o 00:06:16.229 CXX test/cpp_headers/opal_spec.o 00:06:16.229 CXX test/cpp_headers/pci_ids.o 00:06:16.229 LINK fdp 00:06:16.229 CXX test/cpp_headers/pipe.o 00:06:16.229 CXX test/cpp_headers/reduce.o 00:06:16.229 CXX test/cpp_headers/queue.o 00:06:16.488 CXX test/cpp_headers/rpc.o 00:06:16.489 CXX test/cpp_headers/scheduler.o 00:06:16.489 CXX test/cpp_headers/scsi.o 00:06:16.489 CXX test/cpp_headers/scsi_spec.o 00:06:16.489 CXX test/cpp_headers/stdinc.o 00:06:16.489 CXX test/cpp_headers/sock.o 00:06:16.489 CXX test/cpp_headers/string.o 00:06:16.489 CXX test/cpp_headers/thread.o 00:06:16.489 CXX test/cpp_headers/trace.o 00:06:16.489 CXX test/cpp_headers/trace_parser.o 00:06:16.489 CXX test/cpp_headers/tree.o 00:06:16.489 LINK arbitration 00:06:16.489 LINK accel_perf 00:06:16.489 CXX test/cpp_headers/ublk.o 00:06:16.489 LINK reconnect 00:06:16.489 CXX test/cpp_headers/util.o 00:06:16.489 CXX test/cpp_headers/uuid.o 00:06:16.489 CXX test/cpp_headers/version.o 00:06:16.489 CXX 
test/cpp_headers/vfio_user_pci.o 00:06:16.489 CXX test/cpp_headers/vfio_user_spec.o 00:06:16.489 CXX test/cpp_headers/vhost.o 00:06:16.489 LINK spdk_bdev 00:06:16.489 LINK idxd_perf 00:06:16.489 LINK nvme_fuzz 00:06:16.489 LINK nvme_manage 00:06:16.489 LINK spdk_trace 00:06:16.489 CXX test/cpp_headers/xor.o 00:06:16.489 CXX test/cpp_headers/vmd.o 00:06:16.489 CXX test/cpp_headers/zipf.o 00:06:16.489 LINK abort 00:06:16.489 LINK pci_ut 00:06:16.489 LINK dif 00:06:16.750 LINK vhost_fuzz 00:06:16.750 LINK blobcli 00:06:16.750 LINK spdk_nvme 00:06:16.750 LINK mem_callbacks 00:06:16.750 LINK spdk_nvme_identify 00:06:16.750 LINK spdk_top 00:06:16.750 LINK spdk_nvme_perf 00:06:17.009 LINK bdevperf 00:06:17.009 LINK memory_ut 00:06:17.009 LINK cuse 00:06:17.577 LINK iscsi_fuzz 00:06:19.483 LINK esnap 00:06:19.742 00:06:19.742 real 0m39.337s 00:06:19.742 user 6m11.757s 00:06:19.742 sys 3m6.069s 00:06:19.742 10:01:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:06:19.742 10:01:32 -- common/autotest_common.sh@10 -- $ set +x 00:06:19.742 ************************************ 00:06:19.742 END TEST make 00:06:19.742 ************************************ 00:06:19.742 10:01:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.742 10:01:32 -- nvmf/common.sh@7 -- # uname -s 00:06:19.742 10:01:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.742 10:01:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.742 10:01:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.742 10:01:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.742 10:01:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.742 10:01:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.742 10:01:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.742 10:01:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.742 10:01:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.742 10:01:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.742 10:01:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:19.742 10:01:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:19.742 10:01:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.742 10:01:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.742 10:01:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.742 10:01:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.742 10:01:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.742 10:01:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.742 10:01:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.742 10:01:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.742 10:01:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.742 10:01:32 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.742 10:01:32 -- paths/export.sh@5 -- # export PATH 00:06:19.742 10:01:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.742 10:01:32 -- nvmf/common.sh@46 -- # : 0 00:06:19.742 10:01:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:19.742 10:01:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:19.742 10:01:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:19.742 10:01:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.742 10:01:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.742 10:01:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:19.742 10:01:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:19.742 10:01:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:19.742 10:01:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:19.742 10:01:32 -- spdk/autotest.sh@32 -- # uname -s 00:06:19.742 10:01:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:19.742 10:01:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:19.742 10:01:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:19.742 10:01:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:19.742 10:01:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:19.742 10:01:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:19.742 10:01:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:19.742 10:01:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:19.742 10:01:32 -- spdk/autotest.sh@48 -- # udevadm_pid=95490 00:06:19.743 10:01:32 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:19.743 10:01:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:19.743 10:01:32 -- spdk/autotest.sh@54 -- # echo 95492 00:06:19.743 10:01:32 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:19.743 10:01:32 -- spdk/autotest.sh@56 -- # echo 95493 00:06:19.743 10:01:32 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:19.743 10:01:32 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:06:19.743 10:01:32 -- spdk/autotest.sh@60 -- # echo 95494 00:06:19.743 10:01:32 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:06:19.743 10:01:32 -- spdk/autotest.sh@62 -- # echo 95495 00:06:19.743 10:01:32 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:19.743 10:01:32 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:06:19.743 10:01:32 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:06:19.743 10:01:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:19.743 10:01:32 -- common/autotest_common.sh@10 -- # set +x 00:06:19.743 10:01:32 -- spdk/autotest.sh@70 -- # create_test_list 00:06:19.743 10:01:32 -- common/autotest_common.sh@736 -- # xtrace_disable 00:06:19.743 10:01:33 -- common/autotest_common.sh@10 -- # set +x 00:06:19.743 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:06:20.002 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:06:20.002 10:01:33 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:20.002 10:01:33 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:20.002 10:01:33 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:20.002 10:01:33 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:20.002 10:01:33 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:20.002 10:01:33 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:06:20.002 10:01:33 -- common/autotest_common.sh@1440 -- # uname 00:06:20.002 10:01:33 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:06:20.002 10:01:33 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:06:20.002 10:01:33 -- common/autotest_common.sh@1460 -- # uname 00:06:20.002 10:01:33 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:06:20.002 10:01:33 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:06:20.002 10:01:33 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:06:20.003 10:01:33 -- spdk/autotest.sh@83 -- # hash lcov 00:06:20.003 10:01:33 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:20.003 10:01:33 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:06:20.003 --rc lcov_branch_coverage=1 00:06:20.003 --rc lcov_function_coverage=1 00:06:20.003 --rc genhtml_branch_coverage=1 00:06:20.003 --rc genhtml_function_coverage=1 00:06:20.003 --rc genhtml_legend=1 00:06:20.003 --rc geninfo_all_blocks=1 00:06:20.003 ' 00:06:20.003 10:01:33 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:06:20.003 --rc lcov_branch_coverage=1 00:06:20.003 --rc lcov_function_coverage=1 00:06:20.003 --rc genhtml_branch_coverage=1 00:06:20.003 --rc genhtml_function_coverage=1 00:06:20.003 --rc genhtml_legend=1 00:06:20.003 --rc geninfo_all_blocks=1 00:06:20.003 ' 00:06:20.003 10:01:33 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:06:20.003 --rc lcov_branch_coverage=1 00:06:20.003 --rc lcov_function_coverage=1 00:06:20.003 --rc genhtml_branch_coverage=1 00:06:20.003 --rc genhtml_function_coverage=1 00:06:20.003 --rc genhtml_legend=1 00:06:20.003 --rc 
geninfo_all_blocks=1 00:06:20.003 --no-external' 00:06:20.003 10:01:33 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:06:20.003 --rc lcov_branch_coverage=1 00:06:20.003 --rc lcov_function_coverage=1 00:06:20.003 --rc genhtml_branch_coverage=1 00:06:20.003 --rc genhtml_function_coverage=1 00:06:20.003 --rc genhtml_legend=1 00:06:20.003 --rc geninfo_all_blocks=1 00:06:20.003 --no-external' 00:06:20.003 10:01:33 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:20.003 lcov: LCOV version 1.14 00:06:20.003 10:01:33 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:26.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:26.573 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:26.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:26.573 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:26.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:26.573 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:41.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:41.470 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:06:41.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:41.471 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no 
functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:06:41.471 
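The long run of geninfo warnings through this stretch of the log is expected rather than a failure: the test/cpp_headers objects compile each public SPDK header on its own, so their .gcno files contain no function records and contribute nothing to coverage. The capture producing them is the lcov baseline (-c -i) taken before any test runs, so code never executed later still shows up at 0%. A minimal sketch of an equivalent baseline capture, assuming a gcc build with coverage enabled (paths here are illustrative, not taken from this run):

  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  # -c -i: capture initial (pre-run) counters; -t names the tracefile
  lcov $LCOV_OPTS --no-external -q -c -i -t Baseline \
    -d ./spdk -o ./output/cov_base.info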
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:06:41.471 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:06:41.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:41.471 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:06:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:41.472 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:06:42.434 10:01:55 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:06:42.434 10:01:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:42.434 10:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.434 10:01:55 -- spdk/autotest.sh@102 -- # rm -f 00:06:42.694 10:01:55 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:45.234 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:06:45.234 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:06:45.234 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:06:45.234 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:00:04.2 (8086 2021): Already using the ioatdma 
driver 00:06:45.493 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:06:45.493 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:06:45.752 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:06:45.752 10:01:58 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:06:45.752 10:01:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:45.752 10:01:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:45.752 10:01:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:45.752 10:01:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:45.752 10:01:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:45.752 10:01:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:45.752 10:01:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:45.752 10:01:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:45.752 10:01:58 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:06:45.752 10:01:58 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:06:45.752 10:01:58 -- spdk/autotest.sh@121 -- # grep -v p 00:06:45.752 10:01:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:06:45.752 10:01:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:06:45.752 10:01:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:06:45.752 10:01:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:06:45.752 10:01:58 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:45.752 No valid GPT data, bailing 00:06:45.752 10:01:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:45.752 10:01:58 -- scripts/common.sh@393 -- # pt= 00:06:45.752 10:01:58 -- scripts/common.sh@394 -- # return 1 00:06:45.752 10:01:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:45.752 1+0 records in 00:06:45.752 1+0 records out 00:06:45.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462773 s, 227 MB/s 00:06:45.752 10:01:58 -- spdk/autotest.sh@129 -- # sync 00:06:45.752 10:01:58 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:45.752 10:01:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:45.752 10:01:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:51.029 10:02:03 -- spdk/autotest.sh@135 -- # uname -s 00:06:51.029 10:02:03 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:06:51.029 10:02:03 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:51.029 10:02:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.029 10:02:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.029 10:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:51.029 ************************************ 00:06:51.029 START TEST setup.sh 00:06:51.029 ************************************ 
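Before the setup.sh suite below begins, the pre_cleanup step above scanned the block devices: get_zoned_devs read /sys/block/*/queue/zoned (a value of "none" means not zoned, so nothing was excluded here), spdk-gpt.py and blkid both found no partition table on /dev/nvme0n1 ("No valid GPT data, bailing", empty PTTYPE), and autotest then wiped the first MiB so stale metadata cannot confuse later tests. A paraphrased sketch of that logic, not a verbatim copy of autotest.sh (needs root and is destructive on unpartitioned disks):

  for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # only scrub devices without a recognizable partition table
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1
    fi
  done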
00:06:51.029 10:02:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:51.029 * Looking for test storage... 00:06:51.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:51.029 10:02:03 -- setup/test-setup.sh@10 -- # uname -s 00:06:51.029 10:02:03 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:51.029 10:02:03 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:51.029 10:02:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.029 10:02:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.029 10:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:51.029 ************************************ 00:06:51.029 START TEST acl 00:06:51.029 ************************************ 00:06:51.029 10:02:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:51.029 * Looking for test storage... 00:06:51.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:51.029 10:02:03 -- setup/acl.sh@10 -- # get_zoned_devs 00:06:51.029 10:02:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:51.030 10:02:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:51.030 10:02:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:51.030 10:02:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:51.030 10:02:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:51.030 10:02:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:51.030 10:02:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:51.030 10:02:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:51.030 10:02:03 -- setup/acl.sh@12 -- # devs=() 00:06:51.030 10:02:03 -- setup/acl.sh@12 -- # declare -a devs 00:06:51.030 10:02:03 -- setup/acl.sh@13 -- # drivers=() 00:06:51.030 10:02:03 -- setup/acl.sh@13 -- # declare -A drivers 00:06:51.030 10:02:03 -- setup/acl.sh@51 -- # setup reset 00:06:51.030 10:02:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:51.030 10:02:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:53.567 10:02:06 -- setup/acl.sh@52 -- # collect_setup_devs 00:06:53.567 10:02:06 -- setup/acl.sh@16 -- # local dev driver 00:06:53.567 10:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:53.567 10:02:06 -- setup/acl.sh@15 -- # setup output status 00:06:53.567 10:02:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:53.567 10:02:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:56.105 Hugepages 00:06:56.105 node hugesize free / total 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 00:06:56.105 Type BDF 
Vendor Device NUMA Driver Device Block devices 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.105 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:56.105 10:02:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:06:56.105 10:02:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:56.105 10:02:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:56.105 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == 
nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:56.106 10:02:09 -- setup/acl.sh@20 -- # continue 00:06:56.106 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:56.106 10:02:09 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:56.106 10:02:09 -- setup/acl.sh@54 -- # run_test denied denied 00:06:56.106 10:02:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.106 10:02:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.106 10:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.106 ************************************ 00:06:56.106 START TEST denied 00:06:56.106 ************************************ 00:06:56.106 10:02:09 -- common/autotest_common.sh@1104 -- # denied 00:06:56.106 10:02:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:06:56.106 10:02:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:06:56.106 10:02:09 -- setup/acl.sh@38 -- # setup output config 00:06:56.106 10:02:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:56.106 10:02:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:59.396 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:06:59.396 10:02:12 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:06:59.396 10:02:12 -- setup/acl.sh@28 -- # local dev driver 00:06:59.396 10:02:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:06:59.396 10:02:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:06:59.396 10:02:12 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:06:59.396 10:02:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:59.396 10:02:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:59.396 10:02:12 -- setup/acl.sh@41 -- # setup reset 00:06:59.396 10:02:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:59.396 10:02:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:02.683 00:07:02.683 real 0m6.393s 00:07:02.683 user 0m2.039s 00:07:02.683 sys 0m3.596s 00:07:02.683 10:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.683 10:02:15 -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.683 ************************************ 00:07:02.683 END TEST denied 00:07:02.683 ************************************ 00:07:02.683 10:02:15 -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:02.683 10:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.683 10:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.683 10:02:15 -- common/autotest_common.sh@10 -- # set +x 00:07:02.683 ************************************ 00:07:02.683 START TEST allowed 00:07:02.683 ************************************ 00:07:02.683 10:02:15 -- common/autotest_common.sh@1104 -- # allowed 00:07:02.683 10:02:15 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:07:02.683 10:02:15 -- setup/acl.sh@45 -- # setup output config 00:07:02.683 10:02:15 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:07:02.683 10:02:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:02.683 10:02:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:06.875 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:07:06.875 10:02:19 -- setup/acl.sh@47 -- # verify 00:07:06.875 10:02:19 -- setup/acl.sh@28 -- # local dev driver 00:07:06.875 10:02:19 -- setup/acl.sh@48 -- # setup reset 00:07:06.875 10:02:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:06.875 10:02:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:09.411 00:07:09.411 real 0m6.797s 00:07:09.411 user 0m2.210s 00:07:09.411 sys 0m3.751s 00:07:09.411 10:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.411 10:02:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 ************************************ 00:07:09.411 END TEST allowed 00:07:09.411 ************************************ 00:07:09.411 00:07:09.411 real 0m18.917s 00:07:09.411 user 0m6.434s 00:07:09.411 sys 0m11.095s 00:07:09.411 10:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.411 10:02:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 ************************************ 00:07:09.411 END TEST acl 00:07:09.411 ************************************ 00:07:09.411 10:02:22 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:09.411 10:02:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.411 10:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.411 10:02:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 ************************************ 00:07:09.411 START TEST hugepages 00:07:09.411 ************************************ 00:07:09.411 10:02:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:07:09.671 * Looking for test storage... 
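The acl suite that just finished exercises the device filtering of scripts/setup.sh: the denied case exported PCI_BLOCKED=' 0000:5e:00.0' and required the "Skipping denied controller" line in the config output, while the allowed case exported PCI_ALLOWED=0000:5e:00.0 and required the controller to rebind (nvme -> vfio-pci). A condensed sketch of the same two checks, assuming the env-var contract shown in this log:

  PCI_BLOCKED='0000:5e:00.0' ./scripts/setup.sh config \
    | grep 'Skipping denied controller at 0000:5e:00.0'
  ./scripts/setup.sh reset
  PCI_ALLOWED='0000:5e:00.0' ./scripts/setup.sh config \
    | grep -E '0000:5e:00.0 .*: nvme -> .*'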
00:07:09.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:09.671 10:02:22 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:09.671 10:02:22 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:09.671 10:02:22 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:09.671 10:02:22 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:09.671 10:02:22 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:09.671 10:02:22 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:09.671 10:02:22 -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:09.671 10:02:22 -- setup/common.sh@18 -- # local node= 00:07:09.671 10:02:22 -- setup/common.sh@19 -- # local var val 00:07:09.671 10:02:22 -- setup/common.sh@20 -- # local mem_f mem 00:07:09.671 10:02:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:09.671 10:02:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:09.671 10:02:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:09.671 10:02:22 -- setup/common.sh@28 -- # mapfile -t mem 00:07:09.671 10:02:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169228732 kB' 'MemAvailable: 173044728 kB' 'Buffers: 3972 kB' 'Cached: 13672900 kB' 'SwapCached: 0 kB' 'Active: 10611292 kB' 'Inactive: 3663132 kB' 'Active(anon): 9554152 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600884 kB' 'Mapped: 261664 kB' 'Shmem: 8956600 kB' 'KReclaimable: 479880 kB' 'Slab: 1094404 kB' 'SReclaimable: 479880 kB' 'SUnreclaim: 614524 kB' 'KernelStack: 20528 kB' 'PageTables: 10032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 11063216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315488 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.671 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.671 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 
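Each "[[ <key> == Hugepagesize ]] / continue" pair traced here is one iteration of setup/common.sh's get_meminfo: it splits every /proc/meminfo record on ': ' and echoes the value of the requested key (2048 for Hugepagesize on this machine). A simplified, runnable paraphrase of that helper; the real one also handles the per-node meminfo files, omitted here:

  get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      # var is the meminfo key, val its number, _ the unit (e.g. kB)
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  get_meminfo Hugepagesize   # prints 2048 on this box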
00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 
00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # continue 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # IFS=': ' 00:07:09.672 10:02:22 -- setup/common.sh@31 -- # read -r var val _ 00:07:09.672 10:02:22 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:09.672 10:02:22 -- setup/common.sh@33 -- # echo 2048 00:07:09.672 10:02:22 -- setup/common.sh@33 -- # return 0 00:07:09.672 10:02:22 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:09.672 10:02:22 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:09.672 10:02:22 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:09.672 10:02:22 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:09.672 10:02:22 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:09.672 10:02:22 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:09.672 10:02:22 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:09.672 10:02:22 -- setup/hugepages.sh@207 -- # get_nodes 00:07:09.672 10:02:22 -- setup/hugepages.sh@27 -- # local node 00:07:09.672 10:02:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:09.672 10:02:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:09.672 10:02:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:09.672 10:02:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:09.672 10:02:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:09.672 10:02:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:09.672 10:02:22 -- setup/hugepages.sh@208 -- # clear_hp 00:07:09.672 10:02:22 -- setup/hugepages.sh@37 -- # local node hp 00:07:09.672 10:02:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:09.672 10:02:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.672 10:02:22 -- setup/hugepages.sh@41 -- # echo 0 00:07:09.672 10:02:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.672 10:02:22 -- setup/hugepages.sh@41 -- # echo 0 00:07:09.673 10:02:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:09.673 10:02:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.673 10:02:22 -- setup/hugepages.sh@41 -- # echo 0 00:07:09.673 10:02:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:09.673 10:02:22 -- setup/hugepages.sh@41 -- # echo 0 00:07:09.673 10:02:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:09.673 10:02:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:09.673 10:02:22 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:09.673 10:02:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.673 10:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.673 10:02:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.673 ************************************ 00:07:09.673 START TEST default_setup 00:07:09.673 ************************************ 00:07:09.673 10:02:22 -- common/autotest_common.sh@1104 -- # default_setup 00:07:09.673 10:02:22 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:09.673 10:02:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:09.673 10:02:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:09.673 10:02:22 -- setup/hugepages.sh@51 -- # shift 00:07:09.673 10:02:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:09.673 10:02:22 -- setup/hugepages.sh@52 -- # local node_ids 00:07:09.673 10:02:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:09.673 10:02:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:09.673 10:02:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:09.673 10:02:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:09.673 10:02:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:09.673 10:02:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:09.673 10:02:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:09.673 10:02:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:09.673 10:02:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:09.673 10:02:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
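Before default_setup (just starting here) sizes its own pool of 1024 x 2048 kB pages on node 0, the clear_hp step traced above walked both NUMA nodes and zeroed every per-size hugepage pool, then exported CLEAR_HUGE=yes. A sketch of that clearing step under the same sysfs layout (needs root):

  for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"   # drop this pool to zero pages
    done
  done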
00:07:09.673 10:02:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:07:09.673 10:02:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:07:09.673 10:02:22 -- setup/hugepages.sh@73 -- # return 0
00:07:09.673 10:02:22 -- setup/hugepages.sh@137 -- # setup output
00:07:09.673 10:02:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:09.673 10:02:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:12.204 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:12.204 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:12.463 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:12.463 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:12.463 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:07:12.463 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:07:12.463 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:07:13.403 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:07:13.403 10:02:26 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:07:13.403 10:02:26 -- setup/hugepages.sh@89 -- # local node
00:07:13.403 10:02:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:13.403 10:02:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:13.403 10:02:26 -- setup/hugepages.sh@92 -- # local surp
00:07:13.403 10:02:26 -- setup/hugepages.sh@93 -- # local resv
00:07:13.403 10:02:26 -- setup/hugepages.sh@94 -- # local anon
00:07:13.403 10:02:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:13.403 10:02:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:13.403 10:02:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:13.403 10:02:26 -- setup/common.sh@18 -- # local node=
00:07:13.403 10:02:26 -- setup/common.sh@19 -- # local var val
00:07:13.403 10:02:26 -- setup/common.sh@20 -- # local mem_f mem
00:07:13.403 10:02:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:13.403 10:02:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:13.403 10:02:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:13.403 10:02:26 -- setup/common.sh@28 -- # mapfile -t mem
00:07:13.403 10:02:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:13.403 10:02:26 -- setup/common.sh@31 -- # IFS=': '
00:07:13.403 10:02:26 -- setup/common.sh@31 -- # read -r var val _
00:07:13.403 10:02:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171411392 kB' 'MemAvailable: 175227328 kB' 'Buffers: 3972 kB' 'Cached: 13673004 kB' 'SwapCached: 0 kB' 'Active: 10629404 kB' 'Inactive: 3663132 kB' 'Active(anon): 9572264 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618992 kB' 'Mapped: 261448 kB' 'Shmem: 8956704 kB' 'KReclaimable: 479760 kB' 'Slab: 1093064 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 613304 kB' 'KernelStack: 21136 kB' 'PageTables: 11128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11083996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315648 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
[xtrace condensed: setup/common.sh@32 checks each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages; none match and every iteration falls through to continue]
00:07:13.404 10:02:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:13.404 10:02:26 -- setup/common.sh@33 -- # echo 0
10:02:26 -- setup/common.sh@33 -- # return 0
00:07:13.404 10:02:26 -- setup/hugepages.sh@97 -- # anon=0
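The get_meminfo call traced above (and the three that follow it) all walk the same loop: read a meminfo file line by line, split each line on ': ', and print the value whose key matches the requested field. A minimal standalone sketch of that pattern, for reference only (the function name and the simplified per-node prefix handling are illustrative, not the actual setup/common.sh code):
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    # per-node queries read the node's own meminfo file instead
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#"Node $node "}   # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        # first field is the key, second is the value; print on match
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
On the system above, get_meminfo_sketch HugePages_Total would print 1024, and get_meminfo_sketch HugePages_Surp 0 would read node 0's meminfo the way the per-node queries later in this test do.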
00:07:13.404 10:02:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:13.404 10:02:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:13.404 10:02:26 -- setup/common.sh@18 -- # local node=
00:07:13.404 10:02:26 -- setup/common.sh@19 -- # local var val
00:07:13.404 10:02:26 -- setup/common.sh@20 -- # local mem_f mem
00:07:13.404 10:02:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:13.404 10:02:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:13.404 10:02:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:13.404 10:02:26 -- setup/common.sh@28 -- # mapfile -t mem
00:07:13.404 10:02:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:13.404 10:02:26 -- setup/common.sh@31 -- # IFS=': '
00:07:13.404 10:02:26 -- setup/common.sh@31 -- # read -r var val _
00:07:13.404 10:02:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171415296 kB' 'MemAvailable: 175231232 kB' 'Buffers: 3972 kB' 'Cached: 13673008 kB' 'SwapCached: 0 kB' 'Active: 10629148 kB' 'Inactive: 3663132 kB' 'Active(anon): 9572008 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618652 kB' 'Mapped: 261608 kB' 'Shmem: 8956708 kB' 'KReclaimable: 479760 kB' 'Slab: 1093252 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 613492 kB' 'KernelStack: 20768 kB' 'PageTables: 10500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11082616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315632 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
[xtrace condensed: setup/common.sh@32 checks each key from MemTotal through HugePages_Rsvd against HugePages_Surp; none match and every iteration falls through to continue]
00:07:13.405 10:02:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:13.405 10:02:26 -- setup/common.sh@33 -- # echo 0
10:02:26 -- setup/common.sh@33 -- # return 0
00:07:13.405 10:02:26 -- setup/hugepages.sh@99 -- # surp=0
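For reference, the surplus and reserved counters the script extracts from /proc/meminfo are also exposed per page size under sysfs; a quick cross-check sketch (standard kernel hugetlb paths, shown for illustration and not part of the test itself):
hp=/sys/kernel/mm/hugepages/hugepages-2048kB
printf 'total=%s free=%s surplus=%s resv=%s\n' \
    "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")" \
    "$(<"$hp/surplus_hugepages")" "$(<"$hp/resv_hugepages")"
With the pool configured above this would print total=1024 free=1024 surplus=0 resv=0, matching the HugePages_* fields in the snapshots.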
10:02:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
10:02:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
10:02:26 -- setup/common.sh@18 -- # local node=
10:02:26 -- setup/common.sh@19 -- # local var val
10:02:26 -- setup/common.sh@20 -- # local mem_f mem
10:02:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
10:02:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
10:02:26 -- setup/common.sh@25 -- # [[ -n '' ]]
10:02:26 -- setup/common.sh@28 -- # mapfile -t mem
10:02:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
10:02:26 -- setup/common.sh@31 -- # IFS=': '
10:02:26 -- setup/common.sh@31 -- # read -r var val _
00:07:13.406 10:02:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171414384 kB' 'MemAvailable: 175230320 kB' 'Buffers: 3972 kB' 'Cached: 13673020 kB' 'SwapCached: 0 kB' 'Active: 10628056 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570916 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617400 kB' 'Mapped: 261556 kB' 'Shmem: 8956720 kB' 'KReclaimable: 479760 kB' 'Slab: 1093252 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 613492 kB' 'KernelStack: 20736 kB' 'PageTables: 10360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11082632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315632 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
[xtrace condensed: setup/common.sh@32 checks each key from MemTotal through HugePages_Free against HugePages_Rsvd; none match and every iteration falls through to continue]
00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:13.407 10:02:26 -- setup/common.sh@33 -- # echo 0
10:02:26 -- setup/common.sh@33 -- # return 0
00:07:13.407 10:02:26 -- setup/hugepages.sh@100 -- # resv=0
10:02:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
10:02:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
10:02:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
10:02:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
10:02:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
10:02:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
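The two arithmetic checks above are the heart of verify_nr_hugepages: the pool is consistent when the kernel-reported total equals the requested page count plus surplus and reserved pages. A condensed sketch of that bookkeeping, reusing the illustrative get_meminfo_sketch helper from earlier (hypothetical names, not the real script functions):
verify_pool_sketch() {
    local want=$1 total surp resv
    total=$(get_meminfo_sketch HugePages_Total) || return 1
    surp=$(get_meminfo_sketch HugePages_Surp) || return 1
    resv=$(get_meminfo_sketch HugePages_Rsvd) || return 1
    # mirrors the checks traced above: total == requested + surplus + reserved
    (( total == want + surp + resv )) && (( total == want ))
}
verify_pool_sketch 1024 would succeed for the values traced above, since both surplus and reserved are zero.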
get=HugePages_Total 00:07:13.407 10:02:26 -- setup/common.sh@18 -- # local node= 00:07:13.407 10:02:26 -- setup/common.sh@19 -- # local var val 00:07:13.407 10:02:26 -- setup/common.sh@20 -- # local mem_f mem 00:07:13.407 10:02:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.407 10:02:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.407 10:02:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.407 10:02:26 -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.407 10:02:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.407 10:02:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171411560 kB' 'MemAvailable: 175227496 kB' 'Buffers: 3972 kB' 'Cached: 13673032 kB' 'SwapCached: 0 kB' 'Active: 10628756 kB' 'Inactive: 3663132 kB' 'Active(anon): 9571616 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618132 kB' 'Mapped: 261468 kB' 'Shmem: 8956732 kB' 'KReclaimable: 479760 kB' 'Slab: 1093212 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 613452 kB' 'KernelStack: 20800 kB' 'PageTables: 10336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11083792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315760 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.407 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.407 10:02:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:13.407 10:02:26 -- setup/common.sh@32 -- # continue [... xtrace condensed: the setup/common.sh@31 IFS=': ' / read -r var val _ and @32 match/continue cycle repeats for every meminfo field from Active through ShmemPmdMapped, none matching HugePages_Total ...]
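The scan elided above is get_meminfo() from SPDK's test/setup/common.sh: it reads /proc/meminfo (or a node's sysfs meminfo, with the "Node N " prefix stripped) one line at a time with IFS=': ' and skips every field until the requested key matches, then echoes its value. A minimal standalone sketch of the same pattern, assuming a hypothetical helper rather than the SPDK script itself:

    #!/usr/bin/env bash
    # Sketch of the lookup traced here (hypothetical standalone helper; the real
    # implementation is get_meminfo() in SPDK's test/setup/common.sh).
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs and carry a "Node N " prefix on every line.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested one matches, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total    # 1024 in this trace
    get_meminfo HugePages_Surp 0   # surplus hugepages on node0 -> 0

With ':' and space both in IFS, read splits each "Key: value kB" line into the key and its value, so every non-matching key falls through to continue; that is why the trace emits one [[ ... ]] test per meminfo field before the final echo of the value.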
00:07:13.408 10:02:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.408 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.408 10:02:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.408 10:02:26 -- setup/common.sh@33 -- # echo 1024 00:07:13.408 10:02:26 -- setup/common.sh@33 -- # return 0 00:07:13.408 10:02:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:13.408 10:02:26 -- setup/hugepages.sh@112 -- # get_nodes 00:07:13.408 10:02:26 -- setup/hugepages.sh@27 -- # local node 00:07:13.408 10:02:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.408 10:02:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:13.409 10:02:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.409 10:02:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:13.409 10:02:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:13.409 10:02:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:13.409 10:02:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:13.409 10:02:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:13.409 10:02:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:13.409 10:02:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.409 10:02:26 -- setup/common.sh@18 -- # local node=0 00:07:13.409 10:02:26 -- setup/common.sh@19 -- # local var val 00:07:13.409 10:02:26 -- setup/common.sh@20 -- # local mem_f mem 00:07:13.409 10:02:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.409 10:02:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:13.409 10:02:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:13.409 10:02:26 -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.409 10:02:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91517488 kB' 'MemUsed: 6098140 kB' 'SwapCached: 0 
kB' 'Active: 2586960 kB' 'Inactive: 134256 kB' 'Active(anon): 2144648 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281088 kB' 'Mapped: 91840 kB' 'AnonPages: 443252 kB' 'Shmem: 1704520 kB' 'KernelStack: 12024 kB' 'PageTables: 5560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255256 kB' 'Slab: 543936 kB' 'SReclaimable: 255256 kB' 'SUnreclaim: 288680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.409 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.409 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.409 10:02:26 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [... xtrace condensed: the read/match/continue cycle repeats for every node0 meminfo field from Unevictable through HugePages_Total, none matching HugePages_Surp ...] 00:07:13.410 10:02:26 -- setup/common.sh@32 -- # [[ HugePages_Free ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.410 10:02:26 -- setup/common.sh@32 -- # continue 00:07:13.410 10:02:26 -- setup/common.sh@31 -- # IFS=': ' 00:07:13.410 10:02:26 -- setup/common.sh@31 -- # read -r var val _ 00:07:13.410 10:02:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.410 10:02:26 -- setup/common.sh@33 -- # echo 0 00:07:13.410 10:02:26 -- setup/common.sh@33 -- # return 0 00:07:13.410 10:02:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:13.410 10:02:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:13.410 10:02:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:13.410 10:02:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:13.410 10:02:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:13.410 node0=1024 expecting 1024 00:07:13.410 10:02:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:13.410 00:07:13.410 real 0m3.794s 00:07:13.410 user 0m1.166s 00:07:13.410 sys 0m1.848s 00:07:13.410 10:02:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.410 10:02:26 -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 ************************************ 00:07:13.410 END TEST default_setup 00:07:13.410 ************************************ 00:07:13.410 10:02:26 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:13.410 10:02:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.410 10:02:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.410 10:02:26 -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 ************************************ 00:07:13.410 START TEST per_node_1G_alloc 00:07:13.410 ************************************ 00:07:13.410 10:02:26 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:07:13.410 10:02:26 -- setup/hugepages.sh@143 -- # local IFS=, 00:07:13.410 10:02:26 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:07:13.410 10:02:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:07:13.410 10:02:26 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:07:13.410 10:02:26 -- setup/hugepages.sh@51 -- # shift 00:07:13.410 10:02:26 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:07:13.410 10:02:26 -- setup/hugepages.sh@52 -- # local node_ids 00:07:13.410 10:02:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:13.410 10:02:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:13.410 10:02:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:07:13.410 10:02:26 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:07:13.410 10:02:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:13.410 10:02:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:13.410 10:02:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:13.410 10:02:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:13.410 10:02:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:13.410 10:02:26 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:07:13.410 10:02:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:13.410 10:02:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:13.410 10:02:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:13.410 10:02:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:13.410 10:02:26 -- setup/hugepages.sh@73 -- # return 0 00:07:13.410 10:02:26 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:13.410 
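The trace above is the start of per_node_1G_alloc: get_test_nr_hugepages 1048576 0 1 converts a 1 GiB (1048576 kB) request into nr_hugepages=512 pages of the default 2048 kB size, and get_test_nr_hugepages_per_node gives each of the user-named nodes 0 and 1 the full 512-page budget, which is handed to the setup script on the next lines as NRHUGE=512 HUGENODE=0,1. A minimal sketch of that budgeting, assuming a standalone bash script (the real logic lives in test/setup/hugepages.sh):

    #!/usr/bin/env bash
    # Sketch of the per-node hugepage budgeting traced above (hypothetical
    # standalone script; SPDK does this in test/setup/hugepages.sh).
    size_kb=1048576                                                 # 1 GiB request
    hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    nr_hugepages=$((size_kb / hugepage_kb))                         # -> 512

    nodes_test=()
    for node in 0 1; do
        nodes_test[node]=$nr_hugepages   # every listed node gets the full budget
    done

    # The runner then drives scripts/setup.sh with these values, as in the trace:
    #   NRHUGE=512 HUGENODE=0,1 setup output
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"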
10:02:26 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:07:13.410 10:02:26 -- setup/hugepages.sh@146 -- # setup output 00:07:13.410 10:02:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:13.410 10:02:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:15.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:15.941 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:15.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:15.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:16.203 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:16.203 10:02:29 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:07:16.203 10:02:29 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:16.203 10:02:29 -- setup/hugepages.sh@89 -- # local node 00:07:16.203 10:02:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:16.203 10:02:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:16.203 10:02:29 -- setup/hugepages.sh@92 -- # local surp 00:07:16.203 10:02:29 -- setup/hugepages.sh@93 -- # local resv 00:07:16.203 10:02:29 -- setup/hugepages.sh@94 -- # local anon 00:07:16.203 10:02:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:16.203 10:02:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:16.203 10:02:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:16.203 10:02:29 -- setup/common.sh@18 -- # local node= 00:07:16.203 10:02:29 -- setup/common.sh@19 -- # local var val 00:07:16.203 10:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:07:16.203 10:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.203 10:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.203 10:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.203 10:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.203 10:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.203 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.203 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.203 10:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171394228 kB' 'MemAvailable: 175210164 kB' 'Buffers: 3972 kB' 'Cached: 13673120 kB' 'SwapCached: 0 kB' 'Active: 10629532 kB' 'Inactive: 3663132 kB' 'Active(anon): 9572392 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618720 kB' 'Mapped: 261484 
kB' 'Shmem: 8956820 kB' 'KReclaimable: 479760 kB' 'Slab: 1092720 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 612960 kB' 'KernelStack: 20816 kB' 'PageTables: 10528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11084628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315856 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:16.203 10:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.203 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.203 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.203 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.203 10:02:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.203 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.203 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- 
setup/common.sh@31 -- # read -r var val _ [... xtrace condensed: the read/match/continue cycle repeats for every meminfo field from Active(file) through Committed_AS, none matching AnonHugePages ...] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.204 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.204 10:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.204 10:02:29 -- setup/common.sh@33 -- # echo 0 00:07:16.204 10:02:29 -- setup/common.sh@33 -- # return 0 00:07:16.204 10:02:29 -- setup/hugepages.sh@97 -- # anon=0 00:07:16.204 10:02:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:16.204 10:02:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:16.204 10:02:29 -- setup/common.sh@18 -- # local node= 00:07:16.204 10:02:29 -- setup/common.sh@19 -- # local var val 00:07:16.205 10:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:07:16.205 10:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.205 10:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.205 10:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.205 10:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.205 10:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171401208 kB' 'MemAvailable: 175217144 kB' 'Buffers: 3972 kB' 'Cached: 13673120 kB' 'SwapCached: 0 kB' 'Active: 10629504 kB' 'Inactive: 3663132 kB' 'Active(anon): 9572364 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618764 kB' 'Mapped: 261476 kB' 'Shmem: 8956820 kB' 'KReclaimable: 479760 kB' 'Slab: 1092712 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 612952 kB' 'KernelStack: 20880 kB' 'PageTables: 10612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11083248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315744 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.205 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.205 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.205 10:02:29 -- 
setup/common.sh@31 -- # read -r var val _ [... xtrace condensed: the read/match/continue cycle repeats for every meminfo field from Unevictable through HugePages_Free, none matching HugePages_Surp ...] 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.206 10:02:29 -- setup/common.sh@31 -- #
IFS=': ' 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.206 10:02:29 -- setup/common.sh@33 -- # echo 0 00:07:16.206 10:02:29 -- setup/common.sh@33 -- # return 0 00:07:16.206 10:02:29 -- setup/hugepages.sh@99 -- # surp=0 00:07:16.206 10:02:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:16.206 10:02:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:16.206 10:02:29 -- setup/common.sh@18 -- # local node= 00:07:16.206 10:02:29 -- setup/common.sh@19 -- # local var val 00:07:16.206 10:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:07:16.206 10:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.206 10:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.206 10:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.206 10:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.206 10:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.206 10:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171401592 kB' 'MemAvailable: 175217528 kB' 'Buffers: 3972 kB' 'Cached: 13673132 kB' 'SwapCached: 0 kB' 'Active: 10628644 kB' 'Inactive: 3663132 kB' 'Active(anon): 9571504 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617908 kB' 'Mapped: 261476 kB' 'Shmem: 8956832 kB' 'KReclaimable: 479760 kB' 'Slab: 1092732 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 612972 kB' 'KernelStack: 20560 kB' 'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11080476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315664 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.206 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.206 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.206 10:02:29 
-- setup/common.sh@31 -- # IFS=': ' [... xtrace condensed: the read/match/continue scan for HugePages_Rsvd continues through the remaining meminfo fields (Buffers onward) ...]
00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.207 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.207 10:02:29 -- setup/common.sh@33 -- # echo 0 00:07:16.207 10:02:29 -- setup/common.sh@33 -- # return 0 00:07:16.207 10:02:29 -- setup/hugepages.sh@100 -- # resv=0 00:07:16.207 10:02:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:16.207 nr_hugepages=1024 00:07:16.207 10:02:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:16.207 resv_hugepages=0 00:07:16.207 10:02:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:16.207 surplus_hugepages=0 00:07:16.207 10:02:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:16.207 anon_hugepages=0 00:07:16.207 10:02:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:16.207 10:02:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
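The scan above is the get_meminfo helper from setup/common.sh: it mapfile-reads /proc/meminfo (or a node's sysfs meminfo when a node argument is given), strips any "Node N " prefix, then walks the keys with IFS=': ' read -r var val _, continuing past every field until the requested one (HugePages_Rsvd here, captured as resv=0) matches and its value is echoed. A minimal standalone sketch of that pattern, assuming a simplified rewrite rather than the actual SPDK helper:

  #!/usr/bin/env bash
  # Hypothetical simplified rewrite of the get_meminfo pattern in the trace;
  # field names and paths match the log, everything else is illustrative.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local -a mem
      local line var val _
      # Per-node queries read the sysfs copy, as the node0/node1 lookups do.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Node meminfo lines carry a "Node N " prefix; strip it exactly as the
      # trace does with ${mem[@]#Node +([0-9]) }.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip keys until the match
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo HugePages_Rsvd      # -> 0 on this box
  get_meminfo HugePages_Surp 0    # node-scoped lookup -> 0

With surp=0 and resv=0, the (( 1024 == nr_hugepages + surp + resv )) assertion reduces to checking that HugePages_Total is exactly the 1024 pages requested, which the lookup that follows confirms.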
00:07:16.207 10:02:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:16.207 10:02:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:16.207 10:02:29 -- setup/common.sh@18 -- # local node= 00:07:16.207 10:02:29 -- setup/common.sh@19 -- # local var val 00:07:16.207 10:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:07:16.207 10:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.207 10:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.207 10:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.207 10:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.207 10:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.207 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171401592 kB' 'MemAvailable: 175217528 kB' 'Buffers: 3972 kB' 'Cached: 13673148 kB' 'SwapCached: 0 kB' 'Active: 10628240 kB' 'Inactive: 3663132 kB' 'Active(anon): 9571100 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617504 kB' 'Mapped: 261476 kB' 'Shmem: 8956848 kB' 'KReclaimable: 479760 kB' 'Slab: 1092856 kB' 'SReclaimable: 479760 kB' 'SUnreclaim: 613096 kB' 'KernelStack: 20608 kB' 'PageTables: 10120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11080492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315664 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.208 10:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.208 10:02:29 
-- setup/common.sh@32 -- # continue 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.208 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 
00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.469 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.469 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- 
setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.470 10:02:29 -- setup/common.sh@33 -- # echo 1024 00:07:16.470 10:02:29 -- setup/common.sh@33 -- # return 0 00:07:16.470 10:02:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:16.470 10:02:29 -- setup/hugepages.sh@112 -- # get_nodes 00:07:16.470 10:02:29 -- setup/hugepages.sh@27 -- # local node 00:07:16.470 10:02:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:16.470 10:02:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:16.470 10:02:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:16.470 10:02:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:16.470 10:02:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:16.470 10:02:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:16.470 10:02:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:16.470 10:02:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:16.470 10:02:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:16.470 10:02:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:16.470 10:02:29 -- setup/common.sh@18 -- # local node=0 00:07:16.470 10:02:29 -- setup/common.sh@19 -- # local var val 00:07:16.470 10:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:07:16.470 10:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.470 10:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:16.470 10:02:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:16.470 10:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.470 10:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r 
var val _ 00:07:16.470 10:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92560860 kB' 'MemUsed: 5054768 kB' 'SwapCached: 0 kB' 'Active: 2584692 kB' 'Inactive: 134256 kB' 'Active(anon): 2142380 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281172 kB' 'Mapped: 91336 kB' 'AnonPages: 440920 kB' 'Shmem: 1704604 kB' 'KernelStack: 12008 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255256 kB' 'Slab: 543736 kB' 'SReclaimable: 255256 kB' 'SUnreclaim: 288480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # 
continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.470 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.470 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 
10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@33 -- # echo 0 00:07:16.471 10:02:29 -- setup/common.sh@33 -- # return 0 00:07:16.471 10:02:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:16.471 10:02:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:16.471 10:02:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:16.471 10:02:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:16.471 10:02:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:16.471 10:02:29 -- setup/common.sh@18 -- # local node=1 00:07:16.471 10:02:29 -- setup/common.sh@19 -- # local var val 00:07:16.471 10:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:07:16.471 10:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.471 10:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:07:16.471 10:02:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:07:16.471 10:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.471 10:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 78841412 kB' 'MemUsed: 14924140 kB' 'SwapCached: 0 kB' 'Active: 8043364 kB' 'Inactive: 3528876 kB' 'Active(anon): 7428536 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3528876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11395964 kB' 'Mapped: 170140 kB' 'AnonPages: 176356 kB' 'Shmem: 7252260 kB' 'KernelStack: 8584 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 224504 kB' 'Slab: 549120 kB' 'SReclaimable: 224504 kB' 'SUnreclaim: 324616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 
00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.471 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.471 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # continue 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:07:16.472 10:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:07:16.472 10:02:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.472 10:02:29 -- setup/common.sh@33 -- # echo 0 00:07:16.472 10:02:29 -- setup/common.sh@33 -- # return 0 00:07:16.472 10:02:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:16.472 10:02:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:16.472 10:02:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:16.472 10:02:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:16.472 node0=512 expecting 512 00:07:16.472 10:02:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:16.472 10:02:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:16.472 10:02:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:16.472 10:02:29 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:07:16.472 node1=512 expecting 512 00:07:16.472 10:02:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:16.472 00:07:16.472 real 0m2.924s 00:07:16.472 user 0m1.194s 00:07:16.472 sys 0m1.796s 00:07:16.472 10:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.472 10:02:29 -- common/autotest_common.sh@10 -- # set +x 00:07:16.472 ************************************ 00:07:16.472 END TEST per_node_1G_alloc 00:07:16.472 ************************************ 00:07:16.472 10:02:29 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:16.472 10:02:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.472 10:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.472 10:02:29 -- common/autotest_common.sh@10 -- # set +x 00:07:16.472 ************************************ 00:07:16.472 START TEST even_2G_alloc 00:07:16.472 ************************************ 00:07:16.472 10:02:29 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:07:16.472 10:02:29 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:16.472 10:02:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:07:16.472 10:02:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:16.472 10:02:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:16.472 10:02:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:16.472 10:02:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:07:16.472 10:02:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:16.472 10:02:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:07:16.472 10:02:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:16.472 10:02:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:16.472 10:02:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:16.472 10:02:29 -- setup/hugepages.sh@83 -- # : 512 00:07:16.472 10:02:29 -- setup/hugepages.sh@84 -- # : 1 00:07:16.472 10:02:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:16.472 10:02:29 -- setup/hugepages.sh@83 -- # : 0 00:07:16.472 10:02:29 -- setup/hugepages.sh@84 -- # : 0 00:07:16.472 10:02:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:16.472 10:02:29 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:16.472 10:02:29 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:16.472 10:02:29 -- setup/hugepages.sh@153 -- # setup output 00:07:16.472 10:02:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:16.472 10:02:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:19.009 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:19.009 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:19.009 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:19.009 10:02:32 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:19.009 10:02:32 -- setup/hugepages.sh@89 -- # local node 00:07:19.009 10:02:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:07:19.009 10:02:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:07:19.009 10:02:32 -- setup/hugepages.sh@92 -- # local surp 00:07:19.009 10:02:32 -- setup/hugepages.sh@93 -- # local resv 00:07:19.009 10:02:32 -- setup/hugepages.sh@94 -- # local anon 00:07:19.009 10:02:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:19.009 10:02:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:19.009 10:02:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:19.009 10:02:32 -- setup/common.sh@18 -- # local node= 00:07:19.009 10:02:32 -- setup/common.sh@19 -- # local var val 00:07:19.009 10:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:07:19.009 10:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.009 10:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:19.009 10:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:19.009 10:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.009 10:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.009 10:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:07:19.009 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.009 10:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171405172 kB' 'MemAvailable: 175221088 kB' 'Buffers: 3972 kB' 'Cached: 13673232 kB' 'SwapCached: 0 kB' 'Active: 10626520 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569380 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615712 kB' 'Mapped: 260400 kB' 'Shmem: 8956932 kB' 'KReclaimable: 479720 kB' 'Slab: 1092776 kB' 'SReclaimable: 479720 kB' 'SUnreclaim: 613056 kB' 'KernelStack: 20512 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11069492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315632 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:19.009 10:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.009 10:02:32 -- setup/common.sh@32 -- # continue 00:07:19.009 10:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:07:19.009 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.009 10:02:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:19.009 10:02:32 -- setup/common.sh@32 -- # continue 00:07:19.009 10:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:07:19.009 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.009 10:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == 
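For reference, the arithmetic behind the numbers that keep recurring in this trace, assuming the 2048 kB Hugepagesize reported in the snapshot above (illustrative only, not SPDK source):

  size_kb=2097152                             # 2 GiB target handed to get_test_nr_hugepages
  hugepage_kb=2048                            # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))   # = 1024 pages in total
  no_nodes=2
  per_node=$(( nr_hugepages / no_nodes ))     # = 512 pages per NUMA node
  echo "$nr_hugepages pages total, $per_node per node"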
00:07:19.010 10:02:32 -- setup/common.sh@33 -- # echo 0
00:07:19.010 10:02:32 -- setup/common.sh@33 -- # return 0
00:07:19.010 10:02:32 -- setup/hugepages.sh@97 -- # anon=0
00:07:19.010 10:02:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:19.010 10:02:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:19.010 10:02:32 -- setup/common.sh@18 -- # local node=
00:07:19.010 10:02:32 -- setup/common.sh@19 -- # local var val
00:07:19.010 10:02:32 -- setup/common.sh@20 -- # local mem_f mem
00:07:19.010 10:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:19.010 10:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:19.010 10:02:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:19.010 10:02:32 -- setup/common.sh@28 -- # mapfile -t mem
00:07:19.010 10:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:19.010 10:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171405980 kB' 'MemAvailable: 175221896 kB' 'Buffers: 3972 kB' 'Cached: 13673236 kB' 'SwapCached: 0 kB' 'Active: 10626276 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569136 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615484 kB' 'Mapped: 260332 kB' 'Shmem: 8956936 kB' 'KReclaimable: 479720 kB' 'Slab: 1092780 kB' 'SReclaimable: 479720 kB' 'SUnreclaim: 613060 kB' 'KernelStack: 20496 kB' 'PageTables: 9688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11069504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:19.010 [... per-key scan elided as above; HugePages_Surp matches with value 0 ...]
00:07:19.294 10:02:32 -- setup/common.sh@33 -- # echo 0
00:07:19.294 10:02:32 -- setup/common.sh@33 -- # return 0
00:07:19.294 10:02:32 -- setup/hugepages.sh@99 -- # surp=0
00:07:19.294 10:02:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:19.294 10:02:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:19.294 10:02:32 -- setup/common.sh@18 -- # local node=
00:07:19.294 10:02:32 -- setup/common.sh@19 -- # local var val
00:07:19.294 10:02:32 -- setup/common.sh@20 -- # local mem_f mem
00:07:19.294 10:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:19.294 10:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:19.294 10:02:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:19.294 10:02:32 -- setup/common.sh@28 -- # mapfile -t mem
00:07:19.294 10:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:19.294 10:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171406048 kB' 'MemAvailable: 175221964 kB' 'Buffers: 3972 kB' 'Cached: 13673248 kB' 'SwapCached: 0 kB' 'Active: 10626300 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569160 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615476 kB' 'Mapped: 260332 kB' 'Shmem: 8956948 kB' 'KReclaimable: 479720 kB' 'Slab: 1092780 kB' 'SReclaimable: 479720 kB' 'SUnreclaim: 613060 kB' 'KernelStack: 20496 kB' 'PageTables: 9688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11069520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:19.294 [... per-key scan elided; HugePages_Rsvd matches with value 0 ...]
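The three lookups so far feed one consistency check, restated compactly below. This reuses the get_meminfo sketch from earlier with the values observed in this run; it is a summary of the check the trace performs, not a quote of hugepages.sh:

  nr_hugepages=1024                      # requested by the test
  anon=$(get_meminfo AnonHugePages)      # 0 kB: no transparent hugepages in use (checked separately)
  surp=$(get_meminfo HugePages_Surp)     # 0: no surplus pages allocated
  resv=$(get_meminfo HugePages_Rsvd)     # 0: no reserved-but-unfaulted pages
  total=$(get_meminfo HugePages_Total)   # looked up next in the trace; 1024 here
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"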
00:07:19.296 10:02:32 -- setup/common.sh@33 -- # echo 0
00:07:19.296 10:02:32 -- setup/common.sh@33 -- # return 0
00:07:19.296 10:02:32 -- setup/hugepages.sh@100 -- # resv=0
00:07:19.296 10:02:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:19.296 nr_hugepages=1024
00:07:19.296 10:02:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:19.296 resv_hugepages=0
00:07:19.296 10:02:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:19.296 surplus_hugepages=0
00:07:19.296 10:02:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:19.296 anon_hugepages=0
00:07:19.296 10:02:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:19.296 10:02:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:19.296 10:02:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:19.296 10:02:32 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:19.296 10:02:32 -- setup/common.sh@18 -- # local node=
00:07:19.296 10:02:32 -- setup/common.sh@19 -- # local var val
00:07:19.296 10:02:32 -- setup/common.sh@20 -- # local mem_f mem
00:07:19.296 10:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:19.296 10:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:19.296 10:02:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:19.296 10:02:32 -- setup/common.sh@28 -- # mapfile -t mem
00:07:19.296 10:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:19.296 10:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171406048 kB' 'MemAvailable: 175221964 kB' 'Buffers: 3972 kB' 'Cached: 13673260 kB' 'SwapCached: 0 kB' 'Active: 10626376 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569236 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615004 kB' 'Mapped: 260332 kB' 'Shmem: 8956960 kB' 'KReclaimable: 479720 kB' 'Slab: 1092780 kB' 'SReclaimable: 479720 kB' 'SUnreclaim: 613060 kB' 'KernelStack: 20480 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11069532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:19.296 [... per-key scan elided; HugePages_Total matches with value 1024 ...]
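Context for the HUGE_EVEN_ALLOC=yes setting traced at the top of this test: one way a setup script can realize an even split is through the kernel's per-node sysfs knobs, sketched below. This illustrates the mechanism under that assumption and is not the setup.sh source:

  NRHUGE=1024
  nodes=(/sys/devices/system/node/node[0-9]*)   # node0 and node1 on this box
  per_node=$(( NRHUGE / ${#nodes[@]} ))         # 512 apiece
  for n in "${nodes[@]}"; do
      # 2048 kB is the default hugepage size reported in the snapshots above
      echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
  done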
setup/common.sh@31 -- # IFS=': ' 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.297 10:02:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:19.297 10:02:32 -- setup/common.sh@33 -- # echo 1024 00:07:19.297 10:02:32 -- setup/common.sh@33 -- # return 0 00:07:19.297 10:02:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:19.297 10:02:32 -- setup/hugepages.sh@112 -- # get_nodes 00:07:19.297 10:02:32 -- setup/hugepages.sh@27 -- # local node 00:07:19.297 10:02:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:19.297 10:02:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:19.297 10:02:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:19.297 10:02:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:19.297 10:02:32 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:19.297 10:02:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:19.297 10:02:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:19.297 10:02:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:19.297 10:02:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:19.297 10:02:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:19.297 10:02:32 -- setup/common.sh@18 -- # local node=0 00:07:19.297 10:02:32 -- setup/common.sh@19 -- # local var val 00:07:19.297 10:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:07:19.297 10:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:19.297 10:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:19.297 10:02:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:19.297 10:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:07:19.297 10:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.297 10:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92558320 kB' 'MemUsed: 5057308 kB' 'SwapCached: 0 kB' 'Active: 2584032 kB' 'Inactive: 134256 kB' 'Active(anon): 2141720 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281256 kB' 'Mapped: 90184 kB' 'AnonPages: 440336 kB' 'Shmem: 1704688 kB' 'KernelStack: 12056 kB' 'PageTables: 5568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255216 kB' 'Slab: 543856 kB' 'SReclaimable: 255216 kB' 'SUnreclaim: 288640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:19.297 10:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.297 10:02:32 -- setup/common.sh@32 -- # continue 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.297 10:02:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:19.297 10:02:32 -- setup/common.sh@32 -- # continue 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:07:19.297 10:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:07:19.297 10:02:32 -- 
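Readable outside the xtrace noise, the lookup being traced here works like the minimal bash sketch below. The function name, file selection, prefix strip, and the read/continue idiom are all taken from the setup/common.sh lines echoed in the trace; the loop packaging around them is an assumption.

shopt -s extglob                      # required by the +([0-9]) pattern below

# Sketch of setup/common.sh@16-33 as shown in the trace: pick the right
# meminfo source, strip per-node "Node N " prefixes, then scan key by key.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	[[ -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # node files prefix every line with "Node N "
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the long "continue" runs in the trace
		echo "$val"                        # e.g. 1024 for HugePages_Total above
		return 0
	done
	return 1
}

get_meminfo HugePages_Total    # system-wide, via /proc/meminfo
get_meminfo HugePages_Surp 0   # per node, via /sys/devices/system/node/node0/meminfo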
[setup/common.sh@31-32: the node0 snapshot is scanned field by field, MemUsed through HugePages_Free each hitting "continue"]
00:07:19.299 10:02:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:19.299 10:02:32 -- setup/common.sh@33 -- # echo 0
00:07:19.299 10:02:32 -- setup/common.sh@33 -- # return 0
00:07:19.299 10:02:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
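The hugepages.sh@115-117 loop that both node iterations run is, reconstructed from the echoed lines (variable names are the trace's own; treating get_meminfo's output as a plain number is an assumption):

# Sketch of setup/hugepages.sh@115-117: fold reserved and per-node surplus
# pages into the expected count before comparing against the kernel's view.
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))                    # reserved pages, 0 in this run
	surp_node=$(get_meminfo HugePages_Surp "$node")   # per-node surplus, 0 for both nodes here
	(( nodes_test[node] += surp_node ))               # appears as "+= 0" in the trace
done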
00:07:19.299 10:02:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:19.299 10:02:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:19.299 10:02:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:19.299 10:02:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:19.299 10:02:32 -- setup/common.sh@18 -- # local node=1
00:07:19.299 10:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:07:19.299 10:02:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:07:19.299 10:02:32 -- setup/common.sh@28 -- # mapfile -t mem
00:07:19.299 10:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:19.299 10:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 78848224 kB' 'MemUsed: 14917328 kB' 'SwapCached: 0 kB' 'Active: 8042248 kB' 'Inactive: 3528876 kB' 'Active(anon): 7427420 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3528876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11395992 kB' 'Mapped: 170148 kB' 'AnonPages: 175176 kB' 'Shmem: 7252288 kB' 'KernelStack: 8440 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 224504 kB' 'Slab: 548924 kB' 'SReclaimable: 224504 kB' 'SUnreclaim: 324420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the node1 snapshot is scanned the same way, "continue" for every field until HugePages_Surp]
00:07:19.300 10:02:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:19.300 10:02:32 -- setup/common.sh@33 -- # echo 0
00:07:19.300 10:02:32 -- setup/common.sh@33 -- # return 0
00:07:19.300 10:02:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:19.300 10:02:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:19.300 10:02:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:19.300 10:02:32 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:07:19.300 node0=512 expecting 512
00:07:19.300 10:02:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:19.300 10:02:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:19.300 10:02:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:19.300 10:02:32 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:07:19.300 node1=512 expecting 512
00:07:19.300 10:02:32 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:07:19.300
00:07:19.300 real	0m2.831s
00:07:19.300 user	0m1.182s
00:07:19.300 sys	0m1.711s
00:07:19.300 10:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:19.300 10:02:32 -- common/autotest_common.sh@10 -- # set +x
00:07:19.300 ************************************
00:07:19.300 END TEST even_2G_alloc
00:07:19.300 ************************************
00:07:19.300 10:02:32 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:07:19.300 10:02:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:19.300 10:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:19.300 10:02:32 -- common/autotest_common.sh@10 -- # set +x
00:07:19.300 ************************************
00:07:19.300 START TEST odd_alloc
00:07:19.300 ************************************
00:07:19.300 10:02:32 -- common/autotest_common.sh@1104 -- # odd_alloc
00:07:19.300 10:02:32 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:07:19.300 10:02:32 -- setup/hugepages.sh@49 -- # local size=2098176
00:07:19.300 10:02:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:07:19.300 10:02:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:19.300 10:02:32 -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:19.300 10:02:32 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:19.300 10:02:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:07:19.300 10:02:32 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:19.300 10:02:32 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:19.300 10:02:32 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:19.300 10:02:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
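Two numbers in this stretch are worth spelling out. 2098176 kB is 2049 MiB, which at the 2048 kB hugepage size seen in the snapshots is 1024.5 pages, and the trace sets nr_hugepages=1025, so the requested size is rounded up to a whole page; the odd total is then spread over the two nodes as 513 + 512 (nodes_test[1]=512 first, then nodes_test[0]=513 below). A small sketch that reproduces both values; the exact rounding and loop shape inside setup/hugepages.sh are assumptions inferred from the trace:

# nr_hugepages from HUGEMEM: 2049 MiB expressed in kB, 2048 kB per page, rounded up.
size_kb=2098176
page_kb=2048
nr=$(( (size_kb + page_kb - 1) / page_kb ))   # ceiling division -> 1025

# Per-node split: the highest-numbered node takes the floor of what is left,
# so the remainder lands on the lower nodes (matches 512 then 513 in the trace).
left=$nr n=2
declare -a nodes_test
while (( n > 0 )); do
	nodes_test[n - 1]=$(( left / n ))
	left=$(( left - nodes_test[n - 1] ))
	(( n-- ))
done
echo "nr=$nr node0=${nodes_test[0]} node1=${nodes_test[1]}"   # nr=1025 node0=513 node1=512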
00:07:19.300 10:02:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:07:19.300 10:02:32 -- setup/hugepages.sh@83 -- # : 513
00:07:19.300 10:02:32 -- setup/hugepages.sh@84 -- # : 1
00:07:19.300 10:02:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:07:19.300 10:02:32 -- setup/hugepages.sh@83 -- # : 0
00:07:19.300 10:02:32 -- setup/hugepages.sh@84 -- # : 0
00:07:19.300 10:02:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:19.300 10:02:32 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:07:19.300 10:02:32 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:07:19.300 10:02:32 -- setup/hugepages.sh@160 -- # setup output
00:07:19.300 10:02:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:19.300 10:02:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:21.888 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:21.888 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:21.888 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:21.888 10:02:35 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:07:21.888 10:02:35 -- setup/hugepages.sh@89 -- # local node
00:07:21.888 10:02:35 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:21.888 10:02:35 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:21.888 10:02:35 -- setup/hugepages.sh@92 -- # local surp
00:07:21.888 10:02:35 -- setup/hugepages.sh@93 -- # local resv
00:07:21.888 10:02:35 -- setup/hugepages.sh@94 -- # local anon
00:07:21.888 10:02:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
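The "[[ always [madvise] never != ... ]]" test just above is verify_nr_hugepages deciding whether anonymous hugepages can exist at all before it bothers to read AnonHugePages; the matched string is the usual content of the transparent-hugepage switch. A hedged sketch (reading that sysfs file is an assumption; the trace only shows the already-expanded string):

# Skip the AnonHugePages lookup only when THP is globally disabled.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
	anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot that follows
fi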
00:07:21.888 10:02:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:21.888 10:02:35 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:21.888 10:02:35 -- setup/common.sh@18 -- # local node=
00:07:21.888 10:02:35 -- setup/common.sh@19 -- # local var val
00:07:21.888 10:02:35 -- setup/common.sh@20 -- # local mem_f mem
00:07:21.888 10:02:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.888 10:02:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:21.888 10:02:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:21.888 10:02:35 -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.888 10:02:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.888 10:02:35 -- setup/common.sh@31 -- # IFS=': '
00:07:21.888 10:02:35 -- setup/common.sh@31 -- # read -r var val _
00:07:21.888 10:02:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171423200 kB' 'MemAvailable: 175239116 kB' 'Buffers: 3972 kB' 'Cached: 13673352 kB' 'SwapCached: 0 kB' 'Active: 10627052 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569912 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616160 kB' 'Mapped: 260384 kB' 'Shmem: 8957052 kB' 'KReclaimable: 479720 kB' 'Slab: 1092736 kB' 'SReclaimable: 479720 kB' 'SUnreclaim: 613016 kB' 'KernelStack: 20512 kB' 'PageTables: 9736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11070132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315616 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
[setup/common.sh@31-32: scan of the system snapshot, "continue" for every field MemTotal through HardwareCorrupted, until AnonHugePages]
00:07:21.889 10:02:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:21.889 10:02:35 -- setup/common.sh@33 -- # echo 0
00:07:21.889 10:02:35 -- setup/common.sh@33 -- # return 0
00:07:21.889 10:02:35 -- setup/hugepages.sh@97 -- # anon=0
00:07:21.889 10:02:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:21.889 10:02:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:21.889 10:02:35 -- setup/common.sh@18 -- # local node=
00:07:21.889 10:02:35 -- setup/common.sh@19 -- # local var val
00:07:21.889 10:02:35 -- setup/common.sh@20 -- # local mem_f mem
00:07:21.889 10:02:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.889 10:02:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:21.889 10:02:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:21.889 10:02:35 -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.889 10:02:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.889 10:02:35 -- setup/common.sh@31 -- # IFS=': '
00:07:21.889 10:02:35 -- setup/common.sh@31 -- # read -r var val _
00:07:21.890 10:02:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171425888 kB' 'MemAvailable: 175241800 kB' 'Buffers: 3972 kB' 'Cached: 13673356 kB' 'SwapCached: 0 kB' 'Active: 10627236 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570096 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616344 kB' 'Mapped: 260340 kB' 'Shmem: 8957056 kB' 'KReclaimable: 479712 kB' 'Slab: 1092732 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613020 kB' 'KernelStack: 20512 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11074024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
[setup/common.sh@31-32: scan of this snapshot, "continue" for every field MemTotal through HugePages_Free, until HugePages_Surp]
00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:21.891 10:02:35 -- setup/common.sh@33 -- # echo 0
00:07:21.891 10:02:35 -- setup/common.sh@33 -- # return 0
00:07:21.891 10:02:35 -- setup/hugepages.sh@99 -- # surp=0
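With anon and surp both 0, the HugePages_Rsvd lookup that follows completes the same identity that passed for even_2G_alloc at setup/hugepages.sh@110: the kernel's HugePages_Total must equal the requested count plus surplus and reserved pages. Condensed, with names taken from the trace:

# The check behind "(( 1024 == nr_hugepages + surp + resv ))" earlier, and
# the one this odd_alloc run is assembling: 1025 == 1025 + 0 + 0.
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2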
10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:21.891 10:02:35 -- setup/common.sh@33 -- # echo 0 00:07:21.891 10:02:35 -- setup/common.sh@33 -- # return 0 00:07:21.891 10:02:35 -- setup/hugepages.sh@99 -- # surp=0 00:07:21.891 10:02:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:21.891 10:02:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:21.891 10:02:35 -- setup/common.sh@18 -- # local node= 00:07:21.891 10:02:35 -- setup/common.sh@19 -- # local var val 00:07:21.891 10:02:35 -- setup/common.sh@20 -- # local mem_f mem 00:07:21.891 10:02:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:21.891 10:02:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:21.891 10:02:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:21.891 10:02:35 -- setup/common.sh@28 -- # mapfile -t mem 00:07:21.891 10:02:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171427336 kB' 'MemAvailable: 175243248 kB' 'Buffers: 3972 kB' 'Cached: 13673368 kB' 'SwapCached: 0 kB' 'Active: 10627100 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569960 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616192 kB' 'Mapped: 260340 kB' 'Shmem: 8957068 kB' 'KReclaimable: 479712 kB' 'Slab: 1092732 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613020 kB' 'KernelStack: 20464 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11070160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315552 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- 
setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.891 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.891 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- 
setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # continue 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': ' 00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _ 00:07:21.892 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:21.892 10:02:35 -- setup/common.sh@33 -- # echo 0 00:07:21.892 10:02:35 -- setup/common.sh@33 -- # return 0 00:07:21.892 10:02:35 -- setup/hugepages.sh@100 -- # resv=0 
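Both lookups above (HugePages_Surp, then HugePages_Rsvd) go through the same get_meminfo helper: load a meminfo file, walk it line by line, and echo the value of the first key that matches. A minimal standalone sketch of that idea, assuming only the 'Key: value' layout visible in the snapshots above (names are illustrative, not the verbatim setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob   # for the 'Node N ' prefix strip below

    # Echo the value of one meminfo field; read the node-local file under
    # /sys when a node number is given, otherwise /proc/meminfo.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with 'Node N '; drop it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on the system traced above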
00:07:21.892 10:02:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:07:21.892 nr_hugepages=1025
00:07:21.892 10:02:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:21.892 resv_hugepages=0
00:07:21.892 10:02:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:21.892 surplus_hugepages=0
00:07:21.892 10:02:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:21.892 anon_hugepages=0
00:07:21.892 10:02:35 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:07:21.892 10:02:35 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:07:21.892 10:02:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:21.892 10:02:35 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:21.892 10:02:35 -- setup/common.sh@18 -- # local node=
00:07:21.892 10:02:35 -- setup/common.sh@19 -- # local var val
00:07:21.892 10:02:35 -- setup/common.sh@20 -- # local mem_f mem
00:07:21.892 10:02:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.892 10:02:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:21.892 10:02:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:21.892 10:02:35 -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.892 10:02:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.892 10:02:35 -- setup/common.sh@31 -- # IFS=': '
00:07:21.892 10:02:35 -- setup/common.sh@31 -- # read -r var val _
00:07:21.892 10:02:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171427336 kB' 'MemAvailable: 175243248 kB' 'Buffers: 3972 kB' 'Cached: 13673380 kB' 'SwapCached: 0 kB' 'Active: 10627376 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570236 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616464 kB' 'Mapped: 260340 kB' 'Shmem: 8957080 kB' 'KReclaimable: 479712 kB' 'Slab: 1092732 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613020 kB' 'KernelStack: 20512 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11070176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315552 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:21.893 10:02:35 -- setup/common.sh@31-32 -- # [xtrace condensed: each key from MemTotal through Unaccepted compared against HugePages_Total; 'continue' on every non-match]
00:07:21.894 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:21.894 10:02:35 -- setup/common.sh@33 -- # echo 1025
00:07:21.894 10:02:35 -- setup/common.sh@33 -- # return 0
00:07:21.894 10:02:35 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:07:21.894 10:02:35 -- setup/hugepages.sh@112 -- # get_nodes
00:07:21.894 10:02:35 -- setup/hugepages.sh@27 -- # local node
00:07:21.894 10:02:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:21.894 10:02:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:07:21.894 10:02:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:21.894 10:02:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:07:21.894 10:02:35 -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:21.894 10:02:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:21.894 10:02:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:21.894 10:02:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:21.894 10:02:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:21.894 10:02:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:21.894 10:02:35 -- setup/common.sh@18 -- # local node=0
00:07:21.894 10:02:35 -- setup/common.sh@19 -- # local var val
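Before the per-node readback continues below, note what hugepages.sh@107-110 just asserted: the HugePages_Total the kernel reports must equal the pages the test requested plus any surplus and reserved pages. Spelled out with the sketch helper above, using the values from the snapshots just logged (again a sketch, not the hugepages.sh source):

    # Hugepage accounting identity checked by the trace:
    #   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
    nr_hugepages=1025                            # requested by the odd_alloc test
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 above
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 above
    total=$(get_meminfo_sketch HugePages_Total)  # 1025 above
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2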
00:07:21.894 10:02:35 -- setup/common.sh@20 -- # local mem_f mem
00:07:21.894 10:02:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.894 10:02:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:21.894 10:02:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:21.894 10:02:35 -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.894 10:02:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.894 10:02:35 -- setup/common.sh@31 -- # IFS=': '
00:07:21.894 10:02:35 -- setup/common.sh@31 -- # read -r var val _
00:07:21.894 10:02:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92575264 kB' 'MemUsed: 5040364 kB' 'SwapCached: 0 kB' 'Active: 2583724 kB' 'Inactive: 134256 kB' 'Active(anon): 2141412 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281328 kB' 'Mapped: 90184 kB' 'AnonPages: 439844 kB' 'Shmem: 1704760 kB' 'KernelStack: 12072 kB' 'PageTables: 5484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255208 kB' 'Slab: 543928 kB' 'SReclaimable: 255208 kB' 'SUnreclaim: 288720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:07:21.894 10:02:35 -- setup/common.sh@31-32 -- # [xtrace condensed: node0 keys from MemTotal through HugePages_Free compared against HugePages_Surp; 'continue' on every non-match]
00:07:21.895 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:21.895 10:02:35 -- setup/common.sh@33 -- # echo 0
00:07:21.895 10:02:35 -- setup/common.sh@33 -- # return 0
00:07:21.895 10:02:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:21.895 10:02:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:21.895 10:02:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:21.895 10:02:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:21.895 10:02:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:21.895 10:02:35 -- setup/common.sh@18 -- # local node=1
00:07:21.895 10:02:35 -- setup/common.sh@19 -- # local var val
00:07:21.895 10:02:35 -- setup/common.sh@20 -- # local mem_f mem
00:07:21.895 10:02:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:21.895 10:02:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:07:21.895 10:02:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:07:21.895 10:02:35 -- setup/common.sh@28 -- # mapfile -t mem
00:07:21.895 10:02:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:21.895 10:02:35 -- setup/common.sh@31 -- # IFS=': '
00:07:21.895 10:02:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 78852128 kB' 'MemUsed: 14913424 kB' 'SwapCached: 0 kB' 'Active: 8043304 kB' 'Inactive: 3528876 kB' 'Active(anon): 7428476 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3528876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11396024 kB' 'Mapped: 170156 kB' 'AnonPages: 176272 kB' 'Shmem: 7252320 kB' 'KernelStack: 8424 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 224504 kB' 'Slab: 548804 kB' 'SReclaimable: 224504 kB' 'SUnreclaim: 324300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:07:21.895 10:02:35 -- setup/common.sh@31 -- # read -r var val _
00:07:21.895 10:02:35 -- setup/common.sh@31-32 -- # [xtrace condensed: node1 keys from MemTotal through HugePages_Free compared against HugePages_Surp; 'continue' on every non-match]
00:07:22.156 10:02:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:22.156 10:02:35 -- setup/common.sh@33 -- # echo 0
00:07:22.156 10:02:35 -- setup/common.sh@33 -- # return 0
00:07:22.157 10:02:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:22.157 10:02:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:22.157 10:02:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:07:22.157 node0=512 expecting 513
00:07:22.157 10:02:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:22.157 10:02:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
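The sorted_t/sorted_s assignments above, together with the '[[ 512 513 == 512 513 ]]' comparison that closes the test just below, amount to an order-insensitive check: each per-node count is used as a numeric index into an indexed array, and ${!arr[*]} then expands those indices, i.e. the counts, in ascending order, so the test passes whichever node ended up with 512 or 513. The same trick in isolation (illustrative names, not the hugepages.sh source):

    # Compare two multisets of per-node page counts while ignoring which
    # node holds which count, using indexed-array keys as an implicit sort.
    got=(512 513)    # per-node HugePages_Total read back above
    want=(513 512)   # per-node targets the test configured
    declare -a sorted_got=() sorted_want=()
    for v in "${got[@]}"; do sorted_got[v]=1; done
    for v in "${want[@]}"; do sorted_want[v]=1; done
    # ${!arr[*]} expands indices in ascending order: '512 513' on both sides.
    [[ ${!sorted_got[*]} == "${!sorted_want[*]}" ]] && echo 'per-node totals match'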
00:07:22.157 10:02:35 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:07:22.157 node1=513 expecting 512
00:07:22.157 10:02:35 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:07:22.157 
00:07:22.157 real 0m2.746s
00:07:22.157 user 0m1.077s
00:07:22.157 sys 0m1.701s
00:07:22.157 10:02:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:22.157 10:02:35 -- common/autotest_common.sh@10 -- # set +x
00:07:22.157 ************************************
00:07:22.157 END TEST odd_alloc
00:07:22.157 ************************************
00:07:22.157 10:02:35 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:07:22.157 10:02:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:22.157 10:02:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:22.157 10:02:35 -- common/autotest_common.sh@10 -- # set +x
00:07:22.157 ************************************
00:07:22.157 START TEST custom_alloc
00:07:22.157 ************************************
00:07:22.157 10:02:35 -- common/autotest_common.sh@1104 -- # custom_alloc
00:07:22.157 10:02:35 -- setup/hugepages.sh@167 -- # local IFS=,
00:07:22.157 10:02:35 -- setup/hugepages.sh@169 -- # local node
00:07:22.157 10:02:35 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@170 -- # local nodes_hp
00:07:22.157 10:02:35 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:07:22.157 10:02:35 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:07:22.157 10:02:35 -- setup/hugepages.sh@49 -- # local size=1048576
00:07:22.157 10:02:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:07:22.157 10:02:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:22.157 10:02:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:22.157 10:02:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:07:22.157 10:02:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:22.157 10:02:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:22.157 10:02:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:07:22.157 10:02:35 -- setup/hugepages.sh@83 -- # : 256
00:07:22.157 10:02:35 -- setup/hugepages.sh@84 -- # : 1
00:07:22.157 10:02:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:07:22.157 10:02:35 -- setup/hugepages.sh@83 -- # : 0
00:07:22.157 10:02:35 -- setup/hugepages.sh@84 -- # : 0
00:07:22.157 10:02:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:07:22.157 10:02:35 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
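[Editor's note: the trace above converts each requested pool size into a hugepage count and spreads it evenly over the two NUMA nodes (1048576 kB -> 512 pages -> 256 per node). A minimal sketch of that arithmetic follows; the helper names and the below-default clamp are illustrative assumptions, not SPDK's own API.]

    #!/usr/bin/env bash
    # Sketch of the size -> pages -> per-node arithmetic seen in the trace.
    default_hugepages=1048576   # kB reserved by default (512 x 2048 kB pages)
    hugepagesize=2048           # kB per page, from Hugepagesize in /proc/meminfo

    pages_for_size() {          # $1 = requested size in kB
      local size=$1
      # assumption: sizes below the default fall back to it (the trace only
      # exercises the size >= default_hugepages branch)
      (( size >= default_hugepages )) || size=$default_hugepages
      echo $(( size / hugepagesize ))
    }

    split_across_nodes() {      # $1 = total pages, $2 = node count
      local total=$1 nodes=$2 node
      for (( node = 0; node < nodes; node++ )); do
        # even split; any remainder lands on the lower-numbered nodes,
        # which is how an odd total yields the 512/513 pair printed above
        echo "node${node}=$(( total / nodes + (node < total % nodes ? 1 : 0) ))"
      done
    }

    pages_for_size 1048576      # -> 512, matching nr_hugepages=512 above
    pages_for_size 2097152      # -> 1024, matching nr_hugepages=1024 below
    split_across_nodes 512 2    # -> node0=256 node1=256, as in nodes_test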
00:07:22.157 10:02:35 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:07:22.157 10:02:35 -- setup/hugepages.sh@49 -- # local size=2097152
00:07:22.157 10:02:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:22.157 10:02:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:07:22.157 10:02:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:22.157 10:02:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:22.157 10:02:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:22.157 10:02:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:22.157 10:02:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:07:22.157 10:02:35 -- setup/hugepages.sh@78 -- # return 0
00:07:22.157 10:02:35 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:07:22.157 10:02:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:07:22.157 10:02:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:07:22.157 10:02:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:07:22.157 10:02:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:22.157 10:02:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:22.157 10:02:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:22.157 10:02:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:22.157 10:02:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:22.157 10:02:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:07:22.157 10:02:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:07:22.157 10:02:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:07:22.157 10:02:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:07:22.157 10:02:35 -- setup/hugepages.sh@78 -- # return 0
00:07:22.157 10:02:35 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:07:22.157 10:02:35 -- setup/hugepages.sh@187 -- # setup output
00:07:22.157 10:02:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:22.157 10:02:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
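[Editor's note: setup.sh is invoked with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' in its environment. One plausible way such a per-node spec could be applied is sketched below through the standard kernel sysfs knobs; the parsing itself is illustrative and not copied from setup.sh.]

    #!/usr/bin/env bash
    # Sketch: apply a HUGENODE-style spec via sysfs (needs root; assumes the
    # 2048 kB page size reported by Hugepagesize in the snapshots below).
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    IFS=',' read -ra entries <<< "$HUGENODE"
    for entry in "${entries[@]}"; do
      # entry looks like nodes_hp[0]=512 -> node id 0, page count 512
      node=${entry#nodes_hp[}; node=${node%%]*}
      pages=${entry#*=}
      echo "$pages" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done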
00:07:24.695 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:24.695 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:24.695 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:24.695 10:02:37 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:07:24.695 10:02:37 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:07:24.695 10:02:37 -- setup/hugepages.sh@89 -- # local node
00:07:24.695 10:02:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:24.695 10:02:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:24.695 10:02:37 -- setup/hugepages.sh@92 -- # local surp
00:07:24.695 10:02:37 -- setup/hugepages.sh@93 -- # local resv
00:07:24.695 10:02:37 -- setup/hugepages.sh@94 -- # local anon
00:07:24.695 10:02:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:24.695 10:02:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:24.695 10:02:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:24.695 10:02:37 -- setup/common.sh@18 -- # local node=
00:07:24.695 10:02:37 -- setup/common.sh@19 -- # local var val
00:07:24.695 10:02:37 -- setup/common.sh@20 -- # local mem_f mem
00:07:24.695 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.695 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.695 10:02:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.695 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.695 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.959 10:02:37 -- setup/common.sh@31 -- # IFS=': '
00:07:24.959 10:02:37 -- setup/common.sh@31 -- # read -r var val _
00:07:24.959 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170373124 kB' 'MemAvailable: 174189036 kB' 'Buffers: 3972 kB' 'Cached: 13673464 kB' 'SwapCached: 0 kB' 'Active: 10628412 kB' 'Inactive: 3663132 kB' 'Active(anon): 9571272 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617320 kB' 'Mapped: 260380 kB' 'Shmem: 8957164 kB' 'KReclaimable: 479712 kB' 'Slab: 1093060 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613348 kB' 'KernelStack: 20656 kB' 'PageTables: 10448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11070504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315648 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:24.959 10:02:37 -- setup/common.sh@32 -- # [... xtrace condensed: every field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with continue ...]
00:07:24.960 10:02:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:24.960 10:02:38 -- setup/common.sh@33 -- # echo 0
00:07:24.960 10:02:38 -- setup/common.sh@33 -- # return 0
00:07:24.960 10:02:38 -- setup/hugepages.sh@97 -- # anon=0
00:07:24.960 10:02:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:24.960 10:02:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:24.960 10:02:38 -- setup/common.sh@18 -- # local node=
00:07:24.960 10:02:38 -- setup/common.sh@19 -- # local var val
00:07:24.960 10:02:38 -- setup/common.sh@20 -- # local mem_f mem
00:07:24.960 10:02:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.960 10:02:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.960 10:02:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.960 10:02:38 -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.960 10:02:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.960 10:02:38 -- setup/common.sh@31 -- # IFS=': '
00:07:24.960 10:02:38 -- setup/common.sh@31 -- # read -r var val _
00:07:24.960 10:02:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170374148 kB' 'MemAvailable: 174190060 kB' 'Buffers: 3972 kB' 'Cached: 13673468 kB' 'SwapCached: 0 kB' 'Active: 10627864 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570724 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616748 kB' 'Mapped: 260280 kB' 'Shmem: 8957168 kB' 'KReclaimable: 479712 kB' 'Slab: 1093060 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613348 kB' 'KernelStack: 20608 kB' 'PageTables: 10308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11070516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315616 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:24.961 10:02:38 -- setup/common.sh@32 -- # [... xtrace condensed: every field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue ...]
00:07:24.962 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:24.962 10:02:38 -- setup/common.sh@33 -- # echo 0
00:07:24.962 10:02:38 -- setup/common.sh@33 -- # return 0
00:07:24.962 10:02:38 -- setup/hugepages.sh@99 -- # surp=0
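[Editor's note: each get_meminfo call above walks /proc/meminfo with IFS=': ' read until the requested key matches, then echoes its value (0 for both AnonHugePages and HugePages_Surp here). The traced helper slurps the file with mapfile and can also read a per-node /sys/devices/system/node/nodeN/meminfo; the sketch below keeps only the matching loop, with an illustrative function name.]

    #!/usr/bin/env bash
    # Condensed sketch of the /proc/meminfo scan pattern seen in the trace.
    get_meminfo() {             # $1 = key, e.g. HugePages_Surp
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
        # print the first matching key's value and stop
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1                  # key not present
    }
    get_meminfo HugePages_Surp  # prints 0 on the node traced above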
00:07:24.962 10:02:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:24.962 10:02:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:24.962 10:02:38 -- setup/common.sh@18 -- # local node=
00:07:24.962 10:02:38 -- setup/common.sh@19 -- # local var val
00:07:24.962 10:02:38 -- setup/common.sh@20 -- # local mem_f mem
00:07:24.962 10:02:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.962 10:02:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.962 10:02:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.962 10:02:38 -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.962 10:02:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.962 10:02:38 -- setup/common.sh@31 -- # IFS=': '
00:07:24.962 10:02:38 -- setup/common.sh@31 -- # read -r var val _
00:07:24.962 10:02:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170374852 kB' 'MemAvailable: 174190764 kB' 'Buffers: 3972 kB' 'Cached: 13673480 kB' 'SwapCached: 0 kB' 'Active: 10627500 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570360 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616424 kB' 'Mapped: 260348 kB' 'Shmem: 8957180 kB' 'KReclaimable: 479712 kB' 'Slab: 1093076 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613364 kB' 'KernelStack: 20496 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11070532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:24.962 10:02:38 -- setup/common.sh@32 -- # [... xtrace condensed: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue ...]
00:07:24.964 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:24.964 10:02:38 -- setup/common.sh@33 -- # echo 0
00:07:24.964 10:02:38 -- setup/common.sh@33 -- # return 0
00:07:24.964 10:02:38 -- setup/hugepages.sh@100 -- # resv=0
00:07:24.964 10:02:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:07:24.964 nr_hugepages=1536
00:07:24.964 10:02:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:24.964 resv_hugepages=0
00:07:24.964 10:02:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:24.964 surplus_hugepages=0
00:07:24.964 10:02:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:24.964 anon_hugepages=0
00:07:24.964 10:02:38 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:24.964 10:02:38 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
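[Editor's note: the two arithmetic checks above assert that the 1536 pages requested for this test equal the kernel's reported total once surplus and reserved pages are folded in. A standalone restatement of that bookkeeping, using awk instead of the traced read loop; variable names are illustrative.]

    #!/usr/bin/env bash
    # Sketch of the verify_nr_hugepages accounting seen in the trace.
    expected=1536   # pages requested for this test (512 + 1024 across nodes)
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd/  {print $2}' /proc/meminfo)
    if (( expected == total + surp + resv )); then
      echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp: OK"
    else
      echo "hugepage accounting mismatch" >&2
    fi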
00:07:24.964 10:02:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:24.964 10:02:38 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:24.964 10:02:38 -- setup/common.sh@18 -- # local node=
00:07:24.964 10:02:38 -- setup/common.sh@19 -- # local var val
00:07:24.964 10:02:38 -- setup/common.sh@20 -- # local mem_f mem
00:07:24.964 10:02:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.964 10:02:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:24.964 10:02:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:24.964 10:02:38 -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.964 10:02:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.964 10:02:38 -- setup/common.sh@31 -- # IFS=': '
00:07:24.964 10:02:38 -- setup/common.sh@31 -- # read -r var val _
00:07:24.964 10:02:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170374892 kB' 'MemAvailable: 174190804 kB' 'Buffers: 3972 kB' 'Cached: 13673492 kB' 'SwapCached: 0 kB' 'Active: 10627532 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570392 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616420 kB' 'Mapped: 260348 kB' 'Shmem: 8957192 kB' 'KReclaimable: 479712 kB' 'Slab: 1093076 kB' 'SReclaimable: 479712 kB' 'SUnreclaim: 613364 kB' 'KernelStack: 20496 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11070544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:24.965 10:02:38 -- setup/common.sh@32 -- # [... xtrace condensed: every field from MemTotal through WritebackTmp is tested against HugePages_Total and skipped with continue ...]
00:07:24.965 10:02:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:24.965 10:02:38 -- setup/common.sh@32 -- # continue
00:07:24.965 10:02:38 --
setup/common.sh@31 -- # IFS=': ' 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.965 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.965 10:02:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:24.966 10:02:38 -- setup/common.sh@33 -- # echo 1536 00:07:24.966 10:02:38 -- setup/common.sh@33 -- # return 0 00:07:24.966 10:02:38 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:07:24.966 10:02:38 -- setup/hugepages.sh@112 -- # get_nodes 00:07:24.966 10:02:38 -- setup/hugepages.sh@27 -- # local node 00:07:24.966 10:02:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:24.966 10:02:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:24.966 10:02:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:24.966 10:02:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:24.966 10:02:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:24.966 10:02:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:24.966 10:02:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:24.966 10:02:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:24.966 10:02:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:24.966 10:02:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:24.966 10:02:38 -- setup/common.sh@18 -- # local node=0 00:07:24.966 10:02:38 -- setup/common.sh@19 -- # local var val 00:07:24.966 10:02:38 -- setup/common.sh@20 -- # local mem_f mem 00:07:24.966 10:02:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:24.966 10:02:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:24.966 10:02:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:24.966 10:02:38 -- setup/common.sh@28 -- # mapfile -t mem 00:07:24.966 10:02:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:24.966 10:02:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92552888 kB' 'MemUsed: 5062740 kB' 'SwapCached: 0 kB' 'Active: 2584276 kB' 'Inactive: 134256 kB' 'Active(anon): 2141964 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281428 kB' 'Mapped: 90184 kB' 'AnonPages: 440292 kB' 'Shmem: 1704860 kB' 'KernelStack: 12072 kB' 'PageTables: 5588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255208 kB' 'Slab: 543816 kB' 'SReclaimable: 255208 kB' 'SUnreclaim: 288608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 
00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.966 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.966 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 
-- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # continue 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': ' 00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _ 00:07:24.967 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:24.967 10:02:38 -- setup/common.sh@33 -- # echo 0 00:07:24.967 10:02:38 -- setup/common.sh@33 -- # return 0 00:07:24.967 10:02:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:24.967 10:02:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:24.967 10:02:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:24.967 10:02:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:07:24.967 10:02:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:24.967 10:02:38 -- setup/common.sh@18 -- # local node=1 
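The lookups traced above all go through the same setup/common.sh helper: pick /proc/meminfo or the per-node sysfs copy, strip the "Node N " prefix the per-node file carries, then split each line on ': ' and stop at the requested key. A minimal standalone sketch of that pattern (our own reduction, not the SPDK helper verbatim; the sed call stands in for the script's mem=("${mem[@]#Node +([0-9]) }") expansion):

    #!/usr/bin/env bash
    # Look up one meminfo field, system-wide or for a single NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node copies prefix every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # numeric value; a trailing "kB" lands in $_
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total      # system-wide: 1536 in the run above
    get_meminfo HugePages_Surp 0     # NUMA node 0 only: 0 above

The xtrace is so long because the loop re-runs the IFS=': ' / read / compare triple for every field in the dump until the key matches.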
00:07:24.967 10:02:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:24.967 10:02:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:24.967 10:02:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:24.967 10:02:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:24.967 10:02:38 -- setup/common.sh@18 -- # local node=1
00:07:24.967 10:02:38 -- setup/common.sh@19 -- # local var val
00:07:24.967 10:02:38 -- setup/common.sh@20 -- # local mem_f mem
00:07:24.967 10:02:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:24.967 10:02:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:07:24.967 10:02:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:07:24.967 10:02:38 -- setup/common.sh@28 -- # mapfile -t mem
00:07:24.967 10:02:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:24.967 10:02:38 -- setup/common.sh@31 -- # IFS=': '
00:07:24.967 10:02:38 -- setup/common.sh@31 -- # read -r var val _
00:07:24.967 10:02:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 77822624 kB' 'MemUsed: 15942928 kB' 'SwapCached: 0 kB' 'Active: 8043728 kB' 'Inactive: 3528876 kB' 'Active(anon): 7428900 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3528876 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11396048 kB' 'Mapped: 170164 kB' 'AnonPages: 176716 kB' 'Shmem: 7252344 kB' 'KernelStack: 8456 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 224504 kB' 'Slab: 549264 kB' 'SReclaimable: 224504 kB' 'SUnreclaim: 324760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace: per-field walk over the node 1 dump above until HugePages_Surp matches]
00:07:24.968 10:02:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:24.968 10:02:38 -- setup/common.sh@33 -- # echo 0
00:07:24.968 10:02:38 -- setup/common.sh@33 -- # return 0
00:07:24.968 10:02:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:24.968 10:02:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:24.968 10:02:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:24.968 10:02:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:24.968 10:02:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:07:24.968 node0=512 expecting 512
00:07:24.968 10:02:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:24.968 10:02:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:24.968 10:02:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:24.968 10:02:38 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:07:24.968 node1=1024 expecting 1024
00:07:24.968 10:02:38 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:07:24.968 
00:07:24.968 real	0m2.932s
00:07:24.968 user	0m1.197s
00:07:24.968 sys	0m1.799s
00:07:24.968 10:02:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:24.968 10:02:38 -- common/autotest_common.sh@10 -- # set +x
00:07:24.968 ************************************
00:07:24.968 END TEST custom_alloc
00:07:24.968 ************************************
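The 512/1024 split that custom_alloc just verified is steered through per-node sysfs knobs rather than the global nr_hugepages sysctl. A sketch of requesting and reading back an asymmetric 2048kB-page split like the one above (standard kernel hugetlb sysfs paths; needs root):

    # Ask for 512 huge pages on node 0 and 1024 on node 1 (2 MiB pages).
    echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

    # Read back what the kernel actually reserved on each node.
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages") pages"
    done

Reading the counters back matters: under memory pressure the kernel may reserve fewer pages than requested, which is what the nodes_test/nodes_sys comparison above appears to guard against.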
00:07:24.968 10:02:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:07:24.968 10:02:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:24.968 10:02:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:24.968 10:02:38 -- common/autotest_common.sh@10 -- # set +x
00:07:24.969 ************************************
00:07:24.969 START TEST no_shrink_alloc
00:07:24.969 ************************************
00:07:24.969 10:02:38 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:07:24.969 10:02:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:07:24.969 10:02:38 -- setup/hugepages.sh@49 -- # local size=2097152
00:07:24.969 10:02:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:07:24.969 10:02:38 -- setup/hugepages.sh@51 -- # shift
00:07:24.969 10:02:38 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:07:24.969 10:02:38 -- setup/hugepages.sh@52 -- # local node_ids
00:07:24.969 10:02:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:07:24.969 10:02:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:24.969 10:02:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:07:24.969 10:02:38 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:07:24.969 10:02:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:24.969 10:02:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:24.969 10:02:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:24.969 10:02:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:24.969 10:02:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:24.969 10:02:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:07:24.969 10:02:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:07:24.969 10:02:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:07:24.969 10:02:38 -- setup/hugepages.sh@73 -- # return 0
00:07:24.969 10:02:38 -- setup/hugepages.sh@198 -- # setup output
00:07:24.969 10:02:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:24.969 10:02:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:27.505 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:27.505 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:27.505 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:27.505 10:02:40 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:07:27.505 10:02:40 -- setup/hugepages.sh@89 -- # local node
00:07:27.506 10:02:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:27.506 10:02:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:27.506 10:02:40 -- setup/hugepages.sh@92 -- # local surp
00:07:27.506 10:02:40 -- setup/hugepages.sh@93 -- # local resv
00:07:27.506 10:02:40 -- setup/hugepages.sh@94 -- # local anon
00:07:27.506 10:02:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
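The @96 check just above gates the anon accounting: AnonHugePages is only looked up when transparent huge pages are not pinned to "never" (otherwise anon is taken as 0). The same gate in isolation, as a sketch (the path is the standard THP toggle; the bracketed word marks the active mode):

    # Active THP mode is the bracketed entry, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # in kB
    else
        anon=0    # THP off: no anonymous huge pages to discount
    fi
    echo "anon=${anon}"

Here the mode is "always [madvise] never", i.e. madvise-only, so the branch is taken and get_meminfo AnonHugePages runs next (returning 0 kB on this box).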
00:07:27.506 10:02:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:27.506 10:02:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:27.506 10:02:40 -- setup/common.sh@18 -- # local node=
00:07:27.506 10:02:40 -- setup/common.sh@19 -- # local var val
00:07:27.506 10:02:40 -- setup/common.sh@20 -- # local mem_f mem
00:07:27.506 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.506 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:27.506 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:27.506 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.506 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.506 10:02:40 -- setup/common.sh@31 -- # IFS=': '
00:07:27.506 10:02:40 -- setup/common.sh@31 -- # read -r var val _
00:07:27.506 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171466056 kB' 'MemAvailable: 175281944 kB' 'Buffers: 3972 kB' 'Cached: 13673588 kB' 'SwapCached: 0 kB' 'Active: 10626920 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569780 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615856 kB' 'Mapped: 260396 kB' 'Shmem: 8957288 kB' 'KReclaimable: 479664 kB' 'Slab: 1092908 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613244 kB' 'KernelStack: 20480 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11070336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
[setup/common.sh@31-32 xtrace: per-field walk over the dump above until AnonHugePages matches]
00:07:27.507 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:27.507 10:02:40 -- setup/common.sh@33 -- # echo 0
00:07:27.507 10:02:40 -- setup/common.sh@33 -- # return 0
00:07:27.507 10:02:40 -- setup/hugepages.sh@97 -- # anon=0
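With anon in hand, verify_nr_hugepages moves on to the surplus count; it is working toward the same consistency identity seen earlier at setup/hugepages.sh@110: the kernel's total must equal the requested count plus surplus plus reserved pages. As plain arithmetic, reusing the get_meminfo sketch above and assuming resv maps to HugePages_Rsvd as the dumps suggest:

    nr_hugepages=1024                         # what no_shrink_alloc requested
    total=$(get_meminfo HugePages_Total)      # 1024 in the dumps above
    surp=$(get_meminfo HugePages_Surp)        # 0
    resv=$(get_meminfo HugePages_Rsvd)        # 0
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"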
00:07:27.507 10:02:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:27.507 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:27.507 10:02:40 -- setup/common.sh@18 -- # local node=
00:07:27.507 10:02:40 -- setup/common.sh@19 -- # local var val
00:07:27.507 10:02:40 -- setup/common.sh@20 -- # local mem_f mem
00:07:27.507 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.507 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:27.507 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:27.507 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.507 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.507 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171466460 kB' 'MemAvailable: 175282348 kB' 'Buffers: 3972 kB' 'Cached: 13673596 kB' 'SwapCached: 0 kB' 'Active: 10626744 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569604 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615696 kB' 'Mapped: 260356 kB' 'Shmem: 8957296 kB' 'KReclaimable: 479664 kB' 'Slab: 1092860 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613196 kB' 'KernelStack: 20496 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11082352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315552 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:27.507 10:02:40 -- setup/common.sh@31 -- # IFS=': '
00:07:27.507 10:02:40 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32 xtrace: per-field walk toward HugePages_Surp; the captured log breaks off here, mid-loop]
IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # continue 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ Unaccepted == 
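The get_meminfo calls traced above all follow one pattern: pick /proc/meminfo (or a per-node sysfs copy), strip any 'Node N ' prefix with an extglob expansion, split each line on ': ', and print the value of the first matching key. A minimal sketch of that pattern, re-created from the trace rather than copied from SPDK's test/setup/common.sh:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above; names are
# re-created from the xtrace output, not taken from SPDK's source.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem
    # With a node index, read the per-node copy under sysfs instead.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it so both files
    # parse the same (this is the expansion at common.sh@29 above).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        # First matching key: print its numeric value and stop.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp     # system-wide -> 0 in the run above
get_meminfo HugePages_Total 0  # node 0      -> 1024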
00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [... Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd also fail the match and hit 'continue' ...]
00:07:27.508 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:27.508 10:02:40 -- setup/common.sh@33 -- # echo 0
00:07:27.508 10:02:40 -- setup/common.sh@33 -- # return 0
00:07:27.508 10:02:40 -- setup/hugepages.sh@99 -- # surp=0
00:07:27.508 10:02:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:27.508 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:27.508 10:02:40 -- setup/common.sh@18 -- # local node=
00:07:27.508 10:02:40 -- setup/common.sh@19 -- # local var val
00:07:27.508 10:02:40 -- setup/common.sh@20 -- # local mem_f mem
00:07:27.508 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.508 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:27.508 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:27.508 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.508 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.508 10:02:40 -- setup/common.sh@31 -- # IFS=': '
00:07:27.508 10:02:40 -- setup/common.sh@31 -- # read -r var val _
00:07:27.508 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171467384 kB' 'MemAvailable: 175283272 kB' 'Buffers: 3972 kB' 'Cached: 13673596 kB' 'SwapCached: 0 kB' 'Active: 10629664 kB' 'Inactive: 3663132 kB' 'Active(anon): 9572524 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618616 kB' 'Mapped: 260860 kB' 'Shmem: 8957296 kB' 'KReclaimable: 479664 kB' 'Slab: 1092908 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613244 kB' 'KernelStack: 20496 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11074860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315520 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:27.508 10:02:40 -- setup/common.sh@31-32 -- # [... key-by-key scan for HugePages_Rsvd: every key from MemTotal through HugePages_Free fails the match and hits 'continue' ...]
00:07:27.510 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:27.510 10:02:40 -- setup/common.sh@33 -- # echo 0
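Where only a single field is needed, the same lookup the loop above performs can be done in one pass; a sketch using awk (not taken from the SPDK tree), where $2 is the numeric column of the matching /proc/meminfo line:

# Single-field equivalents of the two lookups just traced.
awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # -> 0
awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # -> 0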
00:07:27.510 10:02:40 -- setup/common.sh@33 -- # return 0
00:07:27.510 10:02:40 -- setup/hugepages.sh@100 -- # resv=0
00:07:27.510 10:02:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:27.510 nr_hugepages=1024
00:07:27.510 10:02:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:27.510 resv_hugepages=0
00:07:27.510 10:02:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:27.510 surplus_hugepages=0
00:07:27.510 10:02:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:27.510 anon_hugepages=0
00:07:27.510 10:02:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:27.510 10:02:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:27.510 10:02:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:27.510 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:27.510 10:02:40 -- setup/common.sh@18 -- # local node=
00:07:27.510 10:02:40 -- setup/common.sh@19 -- # local var val
00:07:27.510 10:02:40 -- setup/common.sh@20 -- # local mem_f mem
00:07:27.510 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.510 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:27.510 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:27.510 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.510 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.510 10:02:40 -- setup/common.sh@31 -- # IFS=': '
00:07:27.510 10:02:40 -- setup/common.sh@31 -- # read -r var val _
00:07:27.510 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171467452 kB' 'MemAvailable: 175283340 kB' 'Buffers: 3972 kB' 'Cached: 13673620 kB' 'SwapCached: 0 kB' 'Active: 10626688 kB' 'Inactive: 3663132 kB' 'Active(anon): 9569548 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615620 kB' 'Mapped: 260704 kB' 'Shmem: 8957320 kB' 'KReclaimable: 479664 kB' 'Slab: 1092908 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613244 kB' 'KernelStack: 20496 kB' 'PageTables: 9712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11070880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315552 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:27.511 10:02:40 -- setup/common.sh@31-32 -- # [... key-by-key scan for HugePages_Total: every key from MemTotal through HardwareCorrupted fails the match and hits 'continue' ...]
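The assertion at hugepages.sh@107 above checks that the requested page count equals nr_hugepages plus the surplus and reserved counters just read. A sketch of the same bookkeeping in one awk pass (variable names are mine, not SPDK's; the 1024 target comes from this run):

# Pull all four hugepage counters from /proc/meminfo in one pass and
# re-check the identity the trace asserts.
eval "$(awk -F'[: ]+' '/^HugePages_(Total|Free|Rsvd|Surp)/ {print tolower($1) "=" $2}' /proc/meminfo)"
echo "total=$hugepages_total free=$hugepages_free rsvd=$hugepages_rsvd surp=$hugepages_surp"
(( hugepages_total + hugepages_surp + hugepages_rsvd == 1024 )) && echo OK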
00:07:27.511 10:02:40 -- setup/common.sh@32 -- # [... AnonHugePages through Unaccepted also fail the match and hit 'continue' ...]
00:07:27.511 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:27.511 10:02:40 -- setup/common.sh@33 -- # echo 1024
00:07:27.511 10:02:40 -- setup/common.sh@33 -- # return 0
00:07:27.511 10:02:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:27.511 10:02:40 -- setup/hugepages.sh@112 -- # get_nodes
00:07:27.511 10:02:40 -- setup/hugepages.sh@27 -- # local node
00:07:27.511 10:02:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:27.511 10:02:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:27.511 10:02:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:27.511 10:02:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:27.511 10:02:40 -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:27.511 10:02:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:27.511 10:02:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:27.511 10:02:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:27.511 10:02:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:27.511 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:27.511 10:02:40 -- setup/common.sh@18 -- # local node=0
00:07:27.511 10:02:40 -- setup/common.sh@19 -- # local var val
00:07:27.511 10:02:40 -- setup/common.sh@20 -- # local mem_f mem
00:07:27.511 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:27.511 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:27.511 10:02:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:27.511 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem
00:07:27.511 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:27.511 10:02:40 -- setup/common.sh@31 -- # IFS=': '
00:07:27.511 10:02:40 -- setup/common.sh@31 -- # read -r var val _
00:07:27.511 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91538812 kB' 'MemUsed: 6076816 kB' 'SwapCached: 0 kB' 'Active: 2583940 kB' 'Inactive: 134256 kB' 'Active(anon): 2141628 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281512 kB' 'Mapped: 90184 kB' 'AnonPages: 439980 kB' 'Shmem: 1704944 kB' 'KernelStack: 12024 kB' 'PageTables: 5544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255208 kB' 'Slab: 543908 kB' 'SReclaimable: 255208 kB' 'SUnreclaim: 288700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:27.512 10:02:40 -- setup/common.sh@31-32 -- # [... key-by-key scan of the node0 meminfo for HugePages_Surp: every key from MemTotal through FilePmdMapped fails the match and hits 'continue' ...]
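The get_nodes pass above discovers NUMA nodes by globbing sysfs and keys an array by the node index. A sketch of that discovery, combined with the kernel's standard per-node 2 MiB hugepage counter (the hugepages sysfs path is the documented kernel interface, not something this log shows directly):

#!/usr/bin/env bash
# NUMA-node discovery as traced in get_nodes, plus per-node counters.
shopt -s extglob nullglob

declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node",
    # leaving the bare index (0, 1, ...).
    nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#nodes_sys[@]}"      # -> 2 on the machine traced above
for idx in "${!nodes_sys[@]}"; do
    echo "node$idx: ${nodes_sys[idx]} x 2MiB hugepages"
done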
00:07:27.512 10:02:40 -- setup/common.sh@32 -- # [... Unaccepted, HugePages_Total and HugePages_Free also fail the match and hit 'continue' ...]
00:07:27.512 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:27.512 10:02:40 -- setup/common.sh@33 -- # echo 0
00:07:27.512 10:02:40 -- setup/common.sh@33 -- # return 0
00:07:27.512 10:02:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:27.512 10:02:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:27.512 10:02:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:27.512 10:02:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:27.512 10:02:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:27.512 node0=1024 expecting 1024
00:07:27.512 10:02:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:27.512 10:02:40 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:07:27.512 10:02:40 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:07:27.512 10:02:40 -- setup/hugepages.sh@202 -- # setup output
00:07:27.512 10:02:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:27.512 10:02:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:30.807 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:30.807 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:30.807 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:30.807 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:07:30.807 10:02:43 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
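setup.sh was asked for 512 hugepages (NRHUGE=512) and reported that 1024 were already reserved on node0. The reservation itself goes through standard kernel knobs; a sketch of those interfaces (generic kernel paths, since the exact commands setup.sh runs are not shown in this log):

# System-wide and per-node hugepage reservation via the kernel's knobs.
echo 512 | sudo tee /proc/sys/vm/nr_hugepages                                   # system-wide
echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages  # node 0 only
grep -E 'HugePages_(Total|Free)' /proc/meminfo                                  # confirm the result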
00:07:30.807 10:02:43 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:07:30.807 10:02:43 -- setup/hugepages.sh@89 -- # local node
00:07:30.807 10:02:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:30.807 10:02:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:30.807 10:02:43 -- setup/hugepages.sh@92 -- # local surp
00:07:30.807 10:02:43 -- setup/hugepages.sh@93 -- # local resv
00:07:30.807 10:02:43 -- setup/hugepages.sh@94 -- # local anon
00:07:30.807 10:02:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:30.807 10:02:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:30.807 10:02:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:30.807 10:02:43 -- setup/common.sh@18 -- # local node=
00:07:30.807 10:02:43 -- setup/common.sh@19 -- # local var val
00:07:30.807 10:02:43 -- setup/common.sh@20 -- # local mem_f mem
00:07:30.807 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:30.807 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:30.807 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:30.807 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem
00:07:30.807 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:30.807 10:02:43 -- setup/common.sh@31 -- # IFS=': '
00:07:30.807 10:02:43 -- setup/common.sh@31 -- # read -r var val _
00:07:30.807 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171453660 kB' 'MemAvailable: 175269548 kB' 'Buffers: 3972 kB' 'Cached: 13673696 kB' 'SwapCached: 0 kB' 'Active: 10628404 kB' 'Inactive: 3663132 kB' 'Active(anon): 9571264 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616808 kB' 'Mapped: 260496 kB' 'Shmem: 8957396 kB' 'KReclaimable: 479664 kB' 'Slab: 1093148 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613484 kB' 'KernelStack: 20480 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11070896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:30.807 10:02:43 [setup/common.sh@31-32: per-key IFS/read/continue iterations elided — every key of the snapshot above, MemTotal through HardwareCorrupted, tested in order and skipped until the requested field matches]
00:07:30.807 10:02:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:30.807 10:02:43 -- setup/common.sh@33 -- # echo 0
00:07:30.807 10:02:43 -- setup/common.sh@33 -- # return 0
00:07:30.807 10:02:43 -- setup/hugepages.sh@97 -- # anon=0
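All of the IFS/read/continue churn collapsed above is one small helper: get_meminfo in setup/common.sh scans the meminfo snapshot line by line and prints the value of the first key that matches. A self-contained sketch of the same pattern (simplified — no per-node handling, errors reduced to a return code):

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g. "0" for AnonHugePages.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"                        # the "kB" unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }
    get_meminfo AnonHugePages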
00:07:30.808 10:02:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:30.808 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:30.808 10:02:43 -- setup/common.sh@18 -- # local node=
00:07:30.808 10:02:43 -- setup/common.sh@19 -- # local var val
00:07:30.808 10:02:43 -- setup/common.sh@20 -- # local mem_f mem
00:07:30.808 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:30.808 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:30.808 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:30.808 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem
00:07:30.808 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:30.808 10:02:43 -- setup/common.sh@31 -- # IFS=': '
00:07:30.808 10:02:43 -- setup/common.sh@31 -- # read -r var val _
00:07:30.808 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171454140 kB' 'MemAvailable: 175270028 kB' 'Buffers: 3972 kB' 'Cached: 13673700 kB' 'SwapCached: 0 kB' 'Active: 10627728 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570588 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616572 kB' 'Mapped: 260352 kB' 'Shmem: 8957400 kB' 'KReclaimable: 479664 kB' 'Slab: 1093180 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613516 kB' 'KernelStack: 20496 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11070908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315504 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:30.808 10:02:43 [setup/common.sh@31-32: per-key IFS/read/continue iterations elided — MemTotal through HugePages_Rsvd tested in order and skipped]
00:07:30.809 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:30.809 10:02:43 -- setup/common.sh@33 -- # echo 0
00:07:30.809 10:02:43 -- setup/common.sh@33 -- # return 0
00:07:30.809 10:02:43 -- setup/hugepages.sh@99 -- # surp=0
00:07:30.809 10:02:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:30.809 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:30.809 10:02:43 -- setup/common.sh@18 -- # local node=
00:07:30.809 10:02:43 -- setup/common.sh@19 -- # local var val
00:07:30.809 10:02:43 -- setup/common.sh@20 -- # local mem_f mem
00:07:30.809 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:30.809 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:30.809 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:30.809 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem
00:07:30.809 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:30.809 10:02:43 -- setup/common.sh@31 -- # IFS=': '
00:07:30.809 10:02:43 -- setup/common.sh@31 -- # read -r var val _
00:07:30.809 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171454644 kB' 'MemAvailable: 175270532 kB' 'Buffers: 3972 kB' 'Cached: 13673700 kB' 'SwapCached: 0 kB' 'Active: 10627728 kB' 'Inactive: 3663132 kB' 'Active(anon): 9570588 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057140 kB' 'Inactive(file): 3663132 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616608 kB' 'Mapped: 260352 kB' 'Shmem: 8957400 kB' 'KReclaimable: 479664 kB' 'Slab: 1093180 kB' 'SReclaimable: 479664 kB' 'SUnreclaim: 613516 kB' 'KernelStack: 20512 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11070924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315504 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB'
00:07:30.810 10:02:43 [setup/common.sh@31-32: per-key IFS/read/continue iterations elided — MemTotal through HugePages_Free tested in order and skipped]
00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:30.811 10:02:43 -- setup/common.sh@33 -- # echo 0
00:07:30.811 10:02:43 -- setup/common.sh@33 -- # return 0
00:07:30.811 10:02:43 -- setup/hugepages.sh@100 -- # resv=0
00:07:30.811 10:02:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:30.811 nr_hugepages=1024
00:07:30.811 10:02:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:30.811 resv_hugepages=0
00:07:30.811 10:02:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:30.811 surplus_hugepages=0
00:07:30.811 10:02:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:30.811 anon_hugepages=0
00:07:30.811 10:02:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:30.811 10:02:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
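The two arithmetic checks just above are the pass/fail criterion of verify_nr_hugepages: the kernel-wide hugepage total must account exactly for the requested, surplus and reserved pages. A worked restatement with the values this run echoed (no new logic, just the invariant in isolation):

    # Figures from the trace above: nr_hugepages=1024, surp=0, resv=0.
    nr_hugepages=1024 surp=0 resv=0
    total=1024   # HugePages_Total as returned by get_meminfo below
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"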
34359738367 kB' 'VmallocUsed: 315504 kB' 'VmallocChunk: 0 kB' 'Percpu: 91392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134420 kB' 'DirectMap2M: 25905152 kB' 'DirectMap1G: 173015040 kB' 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.811 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:30.811 10:02:43 -- setup/common.sh@32 -- # continue 00:07:30.812 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:07:30.812 10:02:43 -- 
00:07:30.812 10:02:43 -- setup/common.sh@31 -- # read -r var val _
00:07:30.812 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:30.812 10:02:43 -- setup/common.sh@32 -- # continue
00:07:30.812 10:02:43 -- setup/common.sh@31 -- # IFS=': '
00:07:30.812 10:02:43 -- setup/common.sh@31 -- # read -r var val _
00:07:30.812 [xtrace condensed: the @31/@32 read-and-compare loop repeats identically for every remaining /proc/meminfo field -- Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- each one failing the match and hitting continue]
00:07:30.812 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
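The span above is setup/common.sh's get_meminfo helper: it splits each /proc/meminfo line on ': ' and continues past every key until the requested one (here HugePages_Total) matches. A minimal standalone sketch of the same pattern (bash; the function name and the plain-file redirect are my simplifications, not the exact SPDK source):

    # Split each /proc/meminfo line on ': ' and print the value of one key;
    # every non-matching field hits 'continue', exactly as in the trace above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # the third field (_) swallows the 'kB' unit where present
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value HugePages_Total   # prints 1024 on this runner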
00:07:30.812 10:02:43 -- setup/common.sh@33 -- # echo 1024
00:07:30.812 10:02:43 -- setup/common.sh@33 -- # return 0
00:07:30.812 10:02:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:30.812 10:02:43 -- setup/hugepages.sh@112 -- # get_nodes
00:07:30.812 10:02:43 -- setup/hugepages.sh@27 -- # local node
00:07:30.812 10:02:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:30.812 10:02:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:30.812 10:02:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:30.812 10:02:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:30.812 10:02:43 -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:30.812 10:02:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:30.813 10:02:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:30.813 10:02:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:30.813 10:02:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:30.813 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:30.813 10:02:43 -- setup/common.sh@18 -- # local node=0
00:07:30.813 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:30.813 10:02:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:30.813 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem
00:07:30.813 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:30.813 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91512240 kB' 'MemUsed: 6103388 kB' 'SwapCached: 0 kB' 'Active: 2584224 kB' 'Inactive: 134256 kB' 'Active(anon): 2141912 kB' 'Inactive(anon): 0 kB' 'Active(file): 442312 kB' 'Inactive(file): 134256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2281524 kB' 'Mapped: 90184 kB' 'AnonPages: 440172 kB' 'Shmem: 1704956 kB' 'KernelStack: 12008 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255208 kB' 'Slab: 543952 kB' 'SReclaimable: 255208 kB' 'SUnreclaim: 288744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:30.813 [xtrace condensed: the same @31/@32 key scan walks these node0 fields from MemTotal through HugePages_Free, each hitting continue, until HugePages_Surp matches]
00:07:30.814 10:02:43 -- setup/common.sh@33 -- # echo 0
00:07:30.814 10:02:43 -- setup/common.sh@33 -- # return 0
00:07:30.814 10:02:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:30.814 10:02:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:30.814 10:02:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:30.814 10:02:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:30.814 10:02:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:07:30.814 node0=1024 expecting 1024
00:07:30.814 10:02:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:07:30.814 
00:07:30.814 real 0m5.433s
00:07:30.814 user 0m2.076s
00:07:30.814 sys 0m3.412s
00:07:30.814 10:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.814 10:02:43 -- common/autotest_common.sh@10 -- # set +x
00:07:30.814 ************************************
00:07:30.814 END TEST no_shrink_alloc
00:07:30.814 ************************************
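The per-node lookup traced above reuses the same key scan, but node meminfo lines carry a "Node <n> " prefix that setup/common.sh strips with an extglob pattern before scanning. A sketch of that variant (bash; function name and structure are mine, the prefix-stripping expansion is as shown in the trace):

    # Read /sys/devices/system/node/nodeN/meminfo, strip the "Node <n> "
    # prefix from each line, then run the same key/value scan as before.
    shopt -s extglob                     # needed for the +([0-9]) pattern
    node_meminfo() {
        local get=$1 node=$2 var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
        mem=("${mem[@]#Node +([0-9]) }") # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    node_meminfo HugePages_Surp 0        # prints 0 in the run above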
00:07:30.814 10:02:43 -- setup/hugepages.sh@217 -- # clear_hp
00:07:30.814 10:02:43 -- setup/hugepages.sh@37 -- # local node hp
00:07:30.814 10:02:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:07:30.814 10:02:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:07:30.814 10:02:43 -- setup/hugepages.sh@41 -- # echo 0
00:07:30.814 [xtrace condensed: the @40/@41 pair repeats for each hugepage-size directory under node0 and node1, writing 0 into every nr_hugepages file]
00:07:30.814 10:02:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:07:30.814 10:02:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:07:30.814 
00:07:30.814 real 0m21.006s
00:07:30.814 user 0m8.022s
00:07:30.814 sys 0m12.522s
00:07:30.814 10:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.814 10:02:43 -- common/autotest_common.sh@10 -- # set +x
00:07:30.814 ************************************
00:07:30.814 END TEST hugepages
00:07:30.814 ************************************
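The teardown just traced zeroes every per-node hugepage pool. A minimal sketch of the same cleanup (bash; assumes a NUMA kernel exposing the per-node sysfs tree, and root privileges to write it):

    # Reset every hugepage pool on every NUMA node to 0, as clear_hp does.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # one file per page size (2048kB, 1048576kB, ...)
        done
    done
    export CLEAR_HUGE=yes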
00:07:30.814 10:02:43 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:07:30.814 ************************************
00:07:30.814 START TEST driver
00:07:30.814 ************************************
00:07:30.814 10:02:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:07:30.814 * Looking for test storage...
00:07:30.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:07:30.814 10:02:43 -- setup/driver.sh@68 -- # setup reset
00:07:30.814 10:02:43 -- setup/common.sh@9 -- # [[ reset == output ]]
00:07:30.814 10:02:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:35.002 10:02:47 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:07:35.002 ************************************
00:07:35.002 START TEST guess_driver
00:07:35.002 ************************************
00:07:35.002 10:02:47 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:07:35.002 10:02:47 -- setup/driver.sh@47 -- # local fail=0
00:07:35.002 10:02:47 -- setup/driver.sh@49 -- # pick_driver
00:07:35.002 10:02:47 -- setup/driver.sh@36 -- # vfio
00:07:35.002 10:02:47 -- setup/driver.sh@21 -- # local iommu_grups
00:07:35.002 10:02:47 -- setup/driver.sh@22 -- # local unsafe_vfio
00:07:35.002 10:02:47 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:07:35.002 10:02:47 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:07:35.002 10:02:47 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:07:35.002 10:02:47 -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:07:35.002 10:02:47 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:07:35.002 10:02:47 -- setup/driver.sh@14 -- # mod vfio_pci
00:07:35.002 10:02:47 -- setup/driver.sh@12 -- # dep vfio_pci
00:07:35.002 10:02:47 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:07:35.002 [xtrace condensed: @12 matches the resolved dependency chain -- the irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core and vfio-pci .ko.xz insmod lines under /lib/modules/6.7.0-68.fc38.x86_64 -- against *\.\k\o*, confirming the module set is loadable]
00:07:35.002 10:02:47 -- setup/driver.sh@30 -- # return 0
00:07:35.002 10:02:47 -- setup/driver.sh@37 -- # echo vfio-pci
00:07:35.002 10:02:47 -- setup/driver.sh@49 -- # driver=vfio-pci
00:07:35.002 10:02:47 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:07:35.003 10:02:47 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:07:35.003 Looking for driver=vfio-pci
00:07:35.003 10:02:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:07:35.003 10:02:47 -- setup/driver.sh@45 -- # setup output config
00:07:35.003 10:02:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:35.003 10:02:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
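pick_driver chose vfio-pci because IOMMU groups exist (174 of them) and modprobe can resolve the full vfio_pci module chain. A condensed sketch of that decision (bash; the function is my reduction of driver.sh, which handles more fallback cases than shown here):

    # A driver is usable when IOMMU groups exist and modprobe resolves its
    # whole .ko dependency chain; otherwise fall back / report failure.
    shopt -s nullglob                    # empty glob -> zero array elements
    pick_vfio() {
        local groups=(/sys/kernel/iommu_groups/*)
        (( ${#groups[@]} > 0 )) || return 1                     # no IOMMU -> vfio unusable
        modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
        echo vfio-pci
    }
    driver=$(pick_vfio) || driver='No valid driver found'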
00:07:37.560 10:02:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:07:37.560 10:02:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:07:37.560 10:02:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:07:37.560 [xtrace condensed: the @57/@58/@61 marker check repeats for each device line emitted by setup.sh config; every device reports vfio-pci, so fail never increments]
00:07:38.127 10:02:51 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:07:38.127 10:02:51 -- setup/driver.sh@65 -- # setup reset
00:07:38.127 10:02:51 -- setup/common.sh@9 -- # [[ reset == output ]]
00:07:38.127 10:02:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:42.314 
00:07:42.314 real 0m7.377s
00:07:42.314 user 0m2.094s
00:07:42.314 sys 0m3.729s
00:07:42.314 10:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:42.314 10:02:55 -- common/autotest_common.sh@10 -- # set +x
00:07:42.314 ************************************
00:07:42.314 END TEST guess_driver
00:07:42.314 ************************************
00:07:42.314 
00:07:42.314 real 0m11.381s
00:07:42.314 user 0m3.278s
00:07:42.314 sys 0m5.801s
00:07:42.314 ************************************
00:07:42.314 END TEST driver
00:07:42.314 ************************************
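The loop that was condensed above verifies that every device setup.sh touched actually landed on the guessed driver. A sketch of it (bash; the exact shape of the config output line is an assumption inferred from the `read -r _ _ _ _ marker setup_driver` field positions, e.g. something like '0000:5e:00.0 (8086 0953): nvme -> vfio-pci'):

    # Fields 5 and 6 of each device line are the '->' marker and the driver
    # the device was actually bound to; any mismatch flips fail.
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue            # skip banner/non-device lines
        [[ $setup_driver == vfio-pci ]] || fail=1
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( fail == 0 )) && echo 'every device bound to vfio-pci'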
00:07:42.314 10:02:55 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:07:42.314 ************************************
00:07:42.314 START TEST devices
00:07:42.314 ************************************
00:07:42.314 10:02:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:07:42.314 * Looking for test storage...
00:07:42.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:07:42.314 10:02:55 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:07:42.314 10:02:55 -- setup/devices.sh@192 -- # setup reset
00:07:42.314 10:02:55 -- setup/common.sh@9 -- # [[ reset == output ]]
00:07:42.314 10:02:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:07:45.602 10:02:58 -- setup/devices.sh@194 -- # get_zoned_devs
00:07:45.602 10:02:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:07:45.602 10:02:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:07:45.602 10:02:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:07:45.602 10:02:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:07:45.602 10:02:58 -- setup/devices.sh@196 -- # blocks=()
00:07:45.602 10:02:58 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:07:45.602 10:02:58 -- setup/devices.sh@201 -- # ctrl=nvme0
00:07:45.602 10:02:58 -- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:07:45.602 10:02:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:07:45.602 10:02:58 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:07:45.602 10:02:58 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:07:45.602 10:02:58 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:07:45.602 No valid GPT data, bailing
00:07:45.602 10:02:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:07:45.602 10:02:58 -- scripts/common.sh@393 -- # pt=
00:07:45.602 10:02:58 -- scripts/common.sh@394 -- # return 1
00:07:45.602 10:02:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:07:45.602 10:02:58 -- setup/common.sh@76 -- # local dev=nvme0n1
00:07:45.602 10:02:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:07:45.602 10:02:58 -- setup/common.sh@80 -- # echo 1000204886016
00:07:45.602 10:02:58 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:07:45.602 10:02:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:07:45.602 10:02:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:07:45.602 10:02:58 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:07:45.602 10:02:58 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
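nvme0n1 qualified as the test disk because it carries no partition table (both the spdk-gpt.py probe and blkid came up empty) and its 1000204886016 bytes clear the 3 GiB floor. A sketch of that eligibility check (bash; function name is mine, and I use only the blkid leg of the probe -- the log also runs spdk-gpt.py first):

    # A disk is usable for the test when it has no partition table and is
    # at least min_disk_size bytes (size in /sys is in 512-byte sectors).
    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472
    block_free() {
        local dev=$1 sectors
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1  # table present -> in use
        sectors=$(< "/sys/block/$dev/size")
        (( sectors * 512 >= min_disk_size ))
    }
    block_free nvme0n1 && echo "nvme0n1 usable"   # 1953525168 * 512 = 1000204886016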
00:07:45.602 10:02:58 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:07:45.602 ************************************
00:07:45.602 START TEST nvme_mount
00:07:45.602 ************************************
00:07:45.602 10:02:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:07:45.602 10:02:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:07:45.602 10:02:58 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:45.602 10:02:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:07:45.602 10:02:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:07:45.602 10:02:58 -- setup/common.sh@39 -- # local disk=nvme0n1
00:07:45.602 10:02:58 -- setup/common.sh@40 -- # local part_no=1
00:07:45.602 10:02:58 -- setup/common.sh@41 -- # local size=1073741824
00:07:45.602 10:02:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:07:45.602 10:02:58 -- setup/common.sh@51 -- # (( size /= 512 ))
00:07:45.602 10:02:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:07:45.602 10:02:58 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:07:46.169 Creating new GPT entries in memory.
00:07:46.169 GPT data structures destroyed! You may now partition the disk using fdisk or
00:07:46.169 other utilities.
00:07:46.169 10:02:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:07:47.543 Creating new GPT entries in memory.
00:07:47.543 The operation has completed successfully.
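partition_drive wipes the GPT and carves one 1 GiB partition, serializing sgdisk behind flock on the disk node. A minimal sketch (bash; partprobe stands in for SPDK's sync_dev_uevents.sh helper, which waits on udev events instead):

    # Zap any existing GPT/MBR, then create partition 1 covering sectors
    # 2048..2099199 (2097152 sectors * 512 B = 1 GiB), under an flock so
    # concurrent callers cannot race sgdisk on the same disk.
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    partprobe "$disk"    # re-read the partition table so /dev/nvme0n1p1 appears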
00:07:47.543 10:03:00 -- setup/common.sh@57 -- # (( part++ ))
00:07:47.543 10:03:00 -- setup/common.sh@57 -- # (( part <= part_no ))
00:07:47.543 10:03:00 -- setup/common.sh@62 -- # wait 127090
00:07:47.543 10:03:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:47.543 10:03:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:47.543 10:03:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:07:47.543 10:03:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:07:47.543 10:03:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:47.543 10:03:00 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:07:47.543 10:03:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:07:47.543 10:03:00 -- setup/devices.sh@47 -- # setup output config
00:07:47.543 10:03:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:07:50.072 10:03:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:07:50.072 10:03:03 -- setup/devices.sh@63 -- # found=1
00:07:50.072 [xtrace condensed: the @60/@62 status loop then skips every other PCI function in the scan -- 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 -- none of which matches 0000:5e:00.0]
00:07:50.072 10:03:03 -- setup/devices.sh@66 -- # (( found == 1 ))
00:07:50.072 10:03:03 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:50.072 10:03:03 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:07:50.072 10:03:03 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:07:50.072 10:03:03 -- setup/devices.sh@110 -- # cleanup_nvme
00:07:50.072 10:03:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:50.072 10:03:03 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:50.072 10:03:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:07:50.072 10:03:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:07:50.072 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:07:50.072 10:03:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:07:50.072 10:03:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:07:50.330 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:07:50.330 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:07:50.330 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
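The wipefs lines above show the full teardown: the ext4 superblock magic (53 ef at 0x438) comes off the partition, then both GPT headers and the protective MBR come off the disk. The same cleanup, reduced to its shell (paths as in this log):

    # Unmount, then let wipefs strip every on-disk signature so the disk
    # is pristine for the next sub-test.
    umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    wipefs --all /dev/nvme0n1p1   # ext4 magic at offset 0x438
    wipefs --all /dev/nvme0n1     # primary GPT, backup GPT, protective MBR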
00:07:50.330 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:07:50.330 10:03:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:07:50.330 10:03:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:07:50.330 10:03:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:50.330 10:03:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:07:50.330 10:03:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:07:50.588 10:03:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:50.588 10:03:03 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:07:50.588 10:03:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:07:50.588 10:03:03 -- setup/devices.sh@47 -- # setup output config
00:07:53.118 10:03:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:07:53.118 10:03:05 -- setup/devices.sh@63 -- # found=1
00:07:53.118 [xtrace condensed: the PCI scan again skips 0000:00:04.0-7 and 0000:80:04.0-7]
00:07:53.118 10:03:06 -- setup/devices.sh@66 -- # (( found == 1 ))
00:07:53.118 10:03:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:07:53.118 10:03:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:07:53.118 10:03:06 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:53.118 10:03:06 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' ''
00:07:53.118 10:03:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:07:53.118 10:03:06 -- setup/devices.sh@47 -- # setup output config
00:07:55.652 10:03:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:07:55.652 10:03:08 -- setup/devices.sh@63 -- # found=1
00:07:55.652 [xtrace condensed: the PCI scan skips the 0000:00:04.x and 0000:80:04.x functions once more]
00:07:55.912 10:03:09 -- setup/devices.sh@66 -- # (( found == 1 ))
00:07:55.912 10:03:09 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:07:55.912 10:03:09 -- setup/devices.sh@68 -- # return 0
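Each of the three verify passes above pins PCI_ALLOWED to the test disk's BDF, reruns setup.sh config, and checks that the device is reported as active (mounted, or carrying data after the unmount) rather than rebound. A sketch of the essence of that check (bash; the 'pci _ _ status' field layout and the 'Active devices:' phrasing are taken from the log lines, not from the script source):

    # With PCI_ALLOWED narrowed to one BDF, setup.sh config must refuse to
    # rebind it while the device is in use.
    export PCI_ALLOWED=0000:5e:00.0
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$PCI_ALLOWED" ]] || continue
        [[ $status == *'Active devices:'* ]] && found=1
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 )) && echo 'device left alone while in use'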
00:07:55.912 10:03:09 -- setup/devices.sh@128 -- # cleanup_nvme
00:07:55.913 10:03:09 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:55.913 10:03:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:07:55.913 10:03:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:07:55.913 10:03:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:07:55.913 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:07:55.913 
00:07:55.913 real 0m10.671s
00:07:55.913 user 0m3.153s
00:07:55.913 sys 0m5.313s
00:07:55.913 ************************************
00:07:55.913 END TEST nvme_mount
00:07:55.913 ************************************
00:07:55.913 10:03:09 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:07:55.913 ************************************
00:07:55.913 START TEST dm_mount
00:07:55.913 ************************************
00:07:55.913 10:03:09 -- setup/devices.sh@144 -- # pv=nvme0n1
00:07:55.913 10:03:09 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:07:55.913 10:03:09 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:07:55.913 10:03:09 -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:07:55.913 10:03:09 -- setup/common.sh@39 -- # local disk=nvme0n1
00:07:55.913 10:03:09 -- setup/common.sh@40 -- # local part_no=2
00:07:55.913 10:03:09 -- setup/common.sh@41 -- # local size=1073741824
00:07:55.913 10:03:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:07:55.913 10:03:09 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:07:56.880 Creating new GPT entries in memory.
00:07:56.880 GPT data structures destroyed! You may now partition the disk using fdisk or
00:07:56.880 other utilities.
00:07:56.880 10:03:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:07:58.255 Creating new GPT entries in memory.
00:07:58.255 The operation has completed successfully.
00:07:58.255 10:03:11 -- setup/common.sh@57 -- # (( part++ ))
00:07:58.255 10:03:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:07:59.191 The operation has completed successfully.
00:07:59.191 10:03:12 -- setup/common.sh@57 -- # (( part++ ))
00:07:59.191 10:03:12 -- setup/common.sh@62 -- # wait 131564
00:07:59.191 10:03:12 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:07:59.191 10:03:12 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:07:59.191 10:03:12 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:07:59.191 10:03:12 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:07:59.191 10:03:12 -- setup/devices.sh@160 -- # for t in {1..5}
00:07:59.191 10:03:12 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:07:59.191 10:03:12 -- setup/devices.sh@161 -- # break
00:07:59.191 10:03:12 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:07:59.191 10:03:12 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:07:59.191 10:03:12 -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:07:59.191 10:03:12 -- setup/devices.sh@166 -- # dm=dm-2
00:07:59.191 10:03:12 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:07:59.191 10:03:12 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
00:07:59.191 10:03:12 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:07:59.191 10:03:12 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:07:59.191 10:03:12 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:07:59.191 10:03:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:07:59.191 10:03:12 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
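The dm_mount test builds a device-mapper target over the two fresh partitions, then resolves which dm-N node it became and confirms both partitions list it as a holder. A sketch of that flow (bash; the trace shows only the dmsetup create call, so the linear concatenation table below is an assumption, not the table SPDK actually feeds it):

    # Concatenate the two partitions into one dm device, then resolve the
    # kernel name behind /dev/mapper and check the holder back-references.
    p1=$(blockdev --getsz /dev/nvme0n1p1)        # lengths in 512-byte sectors
    p2=$(blockdev --getsz /dev/nvme0n1p2)
    dmsetup create nvme_dm_test <<EOF
    0 $p1 linear /dev/nvme0n1p1 0
    $p1 $p2 linear /dev/nvme0n1p2 0
    EOF
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-2
    dm=${dm##*/}
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] && echo "$dm holds nvme0n1p1"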
setup/devices.sh@59 -- # local pci status 00:07:59.191 10:03:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:59.191 10:03:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:07:59.191 10:03:12 -- setup/devices.sh@47 -- # setup output config 00:07:59.191 10:03:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:59.191 10:03:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:01.721 10:03:14 -- setup/devices.sh@63 -- # found=1 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:01.721 10:03:14 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:08:01.721 10:03:14 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:01.721 10:03:14 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:08:01.721 10:03:14 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:08:01.721 10:03:14 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:01.721 10:03:14 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:08:01.721 10:03:14 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:08:01.721 10:03:14 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:08:01.721 10:03:14 -- setup/devices.sh@50 -- # local mount_point= 00:08:01.721 10:03:14 -- setup/devices.sh@51 -- # local test_file= 00:08:01.721 10:03:14 -- setup/devices.sh@53 -- # local found=0 00:08:01.721 10:03:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:01.721 10:03:14 -- setup/devices.sh@59 -- # local pci status 00:08:01.721 10:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:01.721 10:03:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:08:01.721 10:03:14 -- setup/devices.sh@47 -- # setup output config 00:08:01.721 10:03:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:01.721 10:03:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:08:04.250 10:03:17 -- setup/devices.sh@63 -- # found=1 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.250 10:03:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:08:04.250 10:03:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:04.510 10:03:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:04.510 10:03:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:04.510 10:03:17 -- setup/devices.sh@68 -- # return 0 00:08:04.510 10:03:17 -- setup/devices.sh@187 -- # cleanup_dm 00:08:04.510 10:03:17 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:04.510 10:03:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:04.510 10:03:17 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:04.510 10:03:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:04.510 10:03:17 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:04.510 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:04.510 10:03:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:04.510 10:03:17 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:04.510 00:08:04.510 real 0m8.504s 00:08:04.510 user 0m2.080s 00:08:04.510 sys 0m3.430s 00:08:04.510 10:03:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.510 10:03:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.510 ************************************ 00:08:04.510 END TEST dm_mount 00:08:04.510 ************************************ 00:08:04.510 10:03:17 -- setup/devices.sh@1 -- # cleanup 00:08:04.510 10:03:17 -- setup/devices.sh@11 -- # cleanup_nvme 00:08:04.510 10:03:17 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:08:04.510 10:03:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:04.510 10:03:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:04.510 10:03:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:04.510 10:03:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:04.769 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:08:04.769 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:08:04.769 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:04.769 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:04.769 10:03:17 -- setup/devices.sh@12 -- # cleanup_dm 00:08:04.769 10:03:17 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:08:04.769 10:03:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:04.769 10:03:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:04.769 10:03:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:04.769 10:03:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:04.769 10:03:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:04.769 00:08:04.769 real 0m22.819s 00:08:04.769 user 0m6.517s 00:08:04.769 sys 0m10.979s 00:08:04.769 10:03:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.769 10:03:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.769 ************************************ 00:08:04.769 END TEST devices 00:08:04.769 ************************************ 00:08:04.769 00:08:04.769 real 1m14.345s 00:08:04.769 user 0m24.322s 00:08:04.769 sys 0m40.581s 00:08:04.769 10:03:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.769 10:03:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.769 ************************************ 00:08:04.769 END TEST setup.sh 00:08:04.769 ************************************ 00:08:04.769 10:03:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:07.299 Hugepages 00:08:07.300 node hugesize free / total 00:08:07.300 node0 1048576kB 0 / 0 00:08:07.300 node0 2048kB 2048 / 2048 00:08:07.300 node1 1048576kB 0 / 0 00:08:07.300 node1 2048kB 0 / 0 00:08:07.300 00:08:07.300 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:07.300 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:08:07.300 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:08:07.300 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:08:07.300 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:08:07.300 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:08:07.300 10:03:20 -- spdk/autotest.sh@141 -- # uname -s 00:08:07.300 10:03:20 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:08:07.300 10:03:20 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:08:07.300 10:03:20 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:09.829 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:09.829 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:09.829 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:09.829 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:08:10.088 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:10.088 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:11.025 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:11.025 10:03:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:11.962 10:03:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:11.962 10:03:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:11.962 10:03:25 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:08:11.962 10:03:25 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:08:11.962 10:03:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:11.962 10:03:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:11.962 10:03:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:11.962 10:03:25 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:11.962 10:03:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:11.962 10:03:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:11.962 10:03:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:08:11.962 10:03:25 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:14.490 Waiting for block devices as requested 00:08:14.490 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:08:14.490 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:14.490 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:14.490 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:14.748 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:14.748 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:14.748 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:15.006 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:15.006 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:15.006 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:15.006 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:15.264 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:15.264 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:15.264 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:15.521 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:15.522 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:15.522 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:15.522 10:03:28 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:08:15.522 10:03:28 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:08:15.522 10:03:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:08:15.522 10:03:28 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:08:15.522 10:03:28 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1530 -- # grep oacs 00:08:15.522 10:03:28 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:08:15.522 10:03:28 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:08:15.522 10:03:28 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:08:15.522 10:03:28 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:08:15.522 10:03:28 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:08:15.522 10:03:28 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:08:15.522 10:03:28 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:08:15.522 10:03:28 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:08:15.522 10:03:28 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:08:15.522 10:03:28 -- common/autotest_common.sh@1542 -- # continue 00:08:15.522 10:03:28 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:08:15.522 10:03:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:15.522 10:03:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 10:03:28 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:08:15.779 10:03:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:15.779 10:03:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.779 10:03:28 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:18.305 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:18.305 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:19.240 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:19.240 10:03:32 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:08:19.240 10:03:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:19.240 10:03:32 -- common/autotest_common.sh@10 -- # set +x 00:08:19.497 10:03:32 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:08:19.497 10:03:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:19.497 10:03:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:19.497 10:03:32 -- common/autotest_common.sh@1562 -- # bdfs=() 00:08:19.497 10:03:32 -- common/autotest_common.sh@1562 -- # local bdfs 00:08:19.497 10:03:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:19.497 10:03:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:19.497 
10:03:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:19.497 10:03:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:19.497 10:03:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:19.497 10:03:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:19.497 10:03:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:19.497 10:03:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:08:19.497 10:03:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:08:19.497 10:03:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:08:19.497 10:03:32 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:08:19.497 10:03:32 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:19.497 10:03:32 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:08:19.497 10:03:32 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:08:19.497 10:03:32 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:08:19.498 10:03:32 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=140640 00:08:19.498 10:03:32 -- common/autotest_common.sh@1583 -- # waitforlisten 140640 00:08:19.498 10:03:32 -- common/autotest_common.sh@819 -- # '[' -z 140640 ']' 00:08:19.498 10:03:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.498 10:03:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:19.498 10:03:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.498 10:03:32 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:19.498 10:03:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:19.498 10:03:32 -- common/autotest_common.sh@10 -- # set +x 00:08:19.498 [2024-04-24 10:03:32.658294] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
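The opal_revert_cleanup step here launches spdk_tgt, waits for its RPC socket, then drives it over JSON-RPC. A hand-run approximation from the spdk repo root (the polling loop is a crude stand-in for the waitforlisten helper, and hugepages are assumed to be configured already via setup.sh):

    ./build/bin/spdk_tgt &
    tgt=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # stand-in for waitforlisten
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test  # this drive lacks Opal, so expect the -32602 error below
    kill $tgt && wait $tgt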
00:08:19.498 [2024-04-24 10:03:32.658342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140640 ] 00:08:19.498 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.498 [2024-04-24 10:03:32.714027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.755 [2024-04-24 10:03:32.794431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.755 [2024-04-24 10:03:32.794545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.319 10:03:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:20.319 10:03:33 -- common/autotest_common.sh@852 -- # return 0 00:08:20.319 10:03:33 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:08:20.319 10:03:33 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:08:20.319 10:03:33 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:08:23.602 nvme0n1 00:08:23.602 10:03:36 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:23.602 [2024-04-24 10:03:36.557806] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:23.602 request: 00:08:23.602 { 00:08:23.602 "nvme_ctrlr_name": "nvme0", 00:08:23.602 "password": "test", 00:08:23.602 "method": "bdev_nvme_opal_revert", 00:08:23.602 "req_id": 1 00:08:23.602 } 00:08:23.602 Got JSON-RPC error response 00:08:23.602 response: 00:08:23.602 { 00:08:23.602 "code": -32602, 00:08:23.602 "message": "Invalid parameters" 00:08:23.602 } 00:08:23.602 10:03:36 -- common/autotest_common.sh@1589 -- # true 00:08:23.602 10:03:36 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:08:23.602 10:03:36 -- common/autotest_common.sh@1593 -- # killprocess 140640 00:08:23.602 10:03:36 -- common/autotest_common.sh@926 -- # '[' -z 140640 ']' 00:08:23.602 10:03:36 -- common/autotest_common.sh@930 -- # kill -0 140640 00:08:23.602 10:03:36 -- common/autotest_common.sh@931 -- # uname 00:08:23.602 10:03:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.602 10:03:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140640 00:08:23.602 10:03:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.602 10:03:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.602 10:03:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140640' 00:08:23.602 killing process with pid 140640 00:08:23.602 10:03:36 -- common/autotest_common.sh@945 -- # kill 140640 00:08:23.602 10:03:36 -- common/autotest_common.sh@950 -- # wait 140640 00:08:24.976 10:03:38 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:08:24.976 10:03:38 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:24.976 10:03:38 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:24.976 10:03:38 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:24.976 10:03:38 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:24.976 10:03:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:24.976 10:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.976 10:03:38 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:24.976 10:03:38 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.976 10:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.976 10:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.976 ************************************ 00:08:24.976 START TEST env 00:08:24.976 ************************************ 00:08:24.976 10:03:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:25.234 * Looking for test storage... 00:08:25.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:25.234 10:03:38 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:25.234 10:03:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.234 10:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.234 10:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.234 ************************************ 00:08:25.234 START TEST env_memory 00:08:25.234 ************************************ 00:08:25.234 10:03:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:25.234 00:08:25.234 00:08:25.234 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.234 http://cunit.sourceforge.net/ 00:08:25.234 00:08:25.234 00:08:25.234 Suite: memory 00:08:25.234 Test: alloc and free memory map ...[2024-04-24 10:03:38.363312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:25.234 passed 00:08:25.234 Test: mem map translation ...[2024-04-24 10:03:38.382438] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:25.234 [2024-04-24 10:03:38.382456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:25.234 [2024-04-24 10:03:38.382491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:25.234 [2024-04-24 10:03:38.382498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:25.234 passed 00:08:25.234 Test: mem map registration ...[2024-04-24 10:03:38.421639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:25.234 [2024-04-24 10:03:38.421655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:25.234 passed 00:08:25.234 Test: mem map adjacent registrations ...passed 00:08:25.234 00:08:25.234 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.235 suites 1 1 n/a 0 0 00:08:25.235 tests 4 4 4 0 0 00:08:25.235 asserts 152 152 152 0 n/a 00:08:25.235 00:08:25.235 Elapsed time = 0.143 seconds 00:08:25.235 00:08:25.235 real 0m0.154s 00:08:25.235 user 0m0.147s 00:08:25.235 sys 0m0.007s 00:08:25.235 10:03:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.235 10:03:38 -- common/autotest_common.sh@10 -- # set +x 
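The *ERROR* lines inside env_memory are the point of the test, not failures: spdk_mem_register() and spdk_mem_map_set_translation() require 2 MiB alignment of both address and length, so vaddr=0x200000 (2097152) with len=1234 trips the length check, vaddr=0x4d2 (1234) with len=2097152 trips the address check, and 281474976710656 (= 2^48) is the first address above the 256 TiB user-VA window the map appears to cover. The suite asserts that each call is rejected, which is why it still reports 4/4 tests passed.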
00:08:25.235 ************************************ 00:08:25.235 END TEST env_memory 00:08:25.235 ************************************ 00:08:25.235 10:03:38 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:25.235 10:03:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.235 10:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.235 10:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 ************************************ 00:08:25.235 START TEST env_vtophys 00:08:25.235 ************************************ 00:08:25.235 10:03:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:25.494 EAL: lib.eal log level changed from notice to debug 00:08:25.494 EAL: Detected lcore 0 as core 0 on socket 0 00:08:25.494 EAL: Detected lcore 1 as core 1 on socket 0 00:08:25.494 EAL: Detected lcore 2 as core 2 on socket 0 00:08:25.494 EAL: Detected lcore 3 as core 3 on socket 0 00:08:25.494 EAL: Detected lcore 4 as core 4 on socket 0 00:08:25.494 EAL: Detected lcore 5 as core 5 on socket 0 00:08:25.494 EAL: Detected lcore 6 as core 6 on socket 0 00:08:25.494 EAL: Detected lcore 7 as core 8 on socket 0 00:08:25.494 EAL: Detected lcore 8 as core 9 on socket 0 00:08:25.494 EAL: Detected lcore 9 as core 10 on socket 0 00:08:25.494 EAL: Detected lcore 10 as core 11 on socket 0 00:08:25.494 EAL: Detected lcore 11 as core 12 on socket 0 00:08:25.494 EAL: Detected lcore 12 as core 13 on socket 0 00:08:25.494 EAL: Detected lcore 13 as core 16 on socket 0 00:08:25.494 EAL: Detected lcore 14 as core 17 on socket 0 00:08:25.494 EAL: Detected lcore 15 as core 18 on socket 0 00:08:25.494 EAL: Detected lcore 16 as core 19 on socket 0 00:08:25.494 EAL: Detected lcore 17 as core 20 on socket 0 00:08:25.494 EAL: Detected lcore 18 as core 21 on socket 0 00:08:25.494 EAL: Detected lcore 19 as core 25 on socket 0 00:08:25.494 EAL: Detected lcore 20 as core 26 on socket 0 00:08:25.494 EAL: Detected lcore 21 as core 27 on socket 0 00:08:25.494 EAL: Detected lcore 22 as core 28 on socket 0 00:08:25.494 EAL: Detected lcore 23 as core 29 on socket 0 00:08:25.494 EAL: Detected lcore 24 as core 0 on socket 1 00:08:25.494 EAL: Detected lcore 25 as core 1 on socket 1 00:08:25.494 EAL: Detected lcore 26 as core 2 on socket 1 00:08:25.494 EAL: Detected lcore 27 as core 3 on socket 1 00:08:25.494 EAL: Detected lcore 28 as core 4 on socket 1 00:08:25.494 EAL: Detected lcore 29 as core 5 on socket 1 00:08:25.494 EAL: Detected lcore 30 as core 6 on socket 1 00:08:25.494 EAL: Detected lcore 31 as core 9 on socket 1 00:08:25.494 EAL: Detected lcore 32 as core 10 on socket 1 00:08:25.494 EAL: Detected lcore 33 as core 11 on socket 1 00:08:25.494 EAL: Detected lcore 34 as core 12 on socket 1 00:08:25.494 EAL: Detected lcore 35 as core 13 on socket 1 00:08:25.494 EAL: Detected lcore 36 as core 16 on socket 1 00:08:25.494 EAL: Detected lcore 37 as core 17 on socket 1 00:08:25.494 EAL: Detected lcore 38 as core 18 on socket 1 00:08:25.494 EAL: Detected lcore 39 as core 19 on socket 1 00:08:25.494 EAL: Detected lcore 40 as core 20 on socket 1 00:08:25.494 EAL: Detected lcore 41 as core 21 on socket 1 00:08:25.494 EAL: Detected lcore 42 as core 24 on socket 1 00:08:25.494 EAL: Detected lcore 43 as core 25 on socket 1 00:08:25.494 EAL: Detected lcore 44 as core 26 on socket 1 00:08:25.494 EAL: Detected lcore 45 as core 27 on socket 1 00:08:25.494 EAL: Detected lcore 46 as 
core 28 on socket 1 00:08:25.494 EAL: Detected lcore 47 as core 29 on socket 1 00:08:25.494 EAL: Detected lcore 48 as core 0 on socket 0 00:08:25.494 EAL: Detected lcore 49 as core 1 on socket 0 00:08:25.494 EAL: Detected lcore 50 as core 2 on socket 0 00:08:25.494 EAL: Detected lcore 51 as core 3 on socket 0 00:08:25.494 EAL: Detected lcore 52 as core 4 on socket 0 00:08:25.494 EAL: Detected lcore 53 as core 5 on socket 0 00:08:25.494 EAL: Detected lcore 54 as core 6 on socket 0 00:08:25.494 EAL: Detected lcore 55 as core 8 on socket 0 00:08:25.494 EAL: Detected lcore 56 as core 9 on socket 0 00:08:25.494 EAL: Detected lcore 57 as core 10 on socket 0 00:08:25.494 EAL: Detected lcore 58 as core 11 on socket 0 00:08:25.494 EAL: Detected lcore 59 as core 12 on socket 0 00:08:25.494 EAL: Detected lcore 60 as core 13 on socket 0 00:08:25.494 EAL: Detected lcore 61 as core 16 on socket 0 00:08:25.494 EAL: Detected lcore 62 as core 17 on socket 0 00:08:25.494 EAL: Detected lcore 63 as core 18 on socket 0 00:08:25.494 EAL: Detected lcore 64 as core 19 on socket 0 00:08:25.494 EAL: Detected lcore 65 as core 20 on socket 0 00:08:25.494 EAL: Detected lcore 66 as core 21 on socket 0 00:08:25.494 EAL: Detected lcore 67 as core 25 on socket 0 00:08:25.494 EAL: Detected lcore 68 as core 26 on socket 0 00:08:25.494 EAL: Detected lcore 69 as core 27 on socket 0 00:08:25.494 EAL: Detected lcore 70 as core 28 on socket 0 00:08:25.494 EAL: Detected lcore 71 as core 29 on socket 0 00:08:25.494 EAL: Detected lcore 72 as core 0 on socket 1 00:08:25.494 EAL: Detected lcore 73 as core 1 on socket 1 00:08:25.494 EAL: Detected lcore 74 as core 2 on socket 1 00:08:25.494 EAL: Detected lcore 75 as core 3 on socket 1 00:08:25.494 EAL: Detected lcore 76 as core 4 on socket 1 00:08:25.494 EAL: Detected lcore 77 as core 5 on socket 1 00:08:25.494 EAL: Detected lcore 78 as core 6 on socket 1 00:08:25.494 EAL: Detected lcore 79 as core 9 on socket 1 00:08:25.494 EAL: Detected lcore 80 as core 10 on socket 1 00:08:25.494 EAL: Detected lcore 81 as core 11 on socket 1 00:08:25.494 EAL: Detected lcore 82 as core 12 on socket 1 00:08:25.494 EAL: Detected lcore 83 as core 13 on socket 1 00:08:25.494 EAL: Detected lcore 84 as core 16 on socket 1 00:08:25.494 EAL: Detected lcore 85 as core 17 on socket 1 00:08:25.494 EAL: Detected lcore 86 as core 18 on socket 1 00:08:25.494 EAL: Detected lcore 87 as core 19 on socket 1 00:08:25.494 EAL: Detected lcore 88 as core 20 on socket 1 00:08:25.494 EAL: Detected lcore 89 as core 21 on socket 1 00:08:25.494 EAL: Detected lcore 90 as core 24 on socket 1 00:08:25.494 EAL: Detected lcore 91 as core 25 on socket 1 00:08:25.494 EAL: Detected lcore 92 as core 26 on socket 1 00:08:25.494 EAL: Detected lcore 93 as core 27 on socket 1 00:08:25.494 EAL: Detected lcore 94 as core 28 on socket 1 00:08:25.494 EAL: Detected lcore 95 as core 29 on socket 1 00:08:25.494 EAL: Maximum logical cores by configuration: 128 00:08:25.494 EAL: Detected CPU lcores: 96 00:08:25.494 EAL: Detected NUMA nodes: 2 00:08:25.494 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:25.494 EAL: Detected shared linkage of DPDK 00:08:25.494 EAL: No shared files mode enabled, IPC will be disabled 00:08:25.494 EAL: Bus pci wants IOVA as 'DC' 00:08:25.494 EAL: Buses did not request a specific IOVA mode. 00:08:25.494 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:25.494 EAL: Selected IOVA mode 'VA' 00:08:25.494 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.494 EAL: Probing VFIO support... 
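A pattern worth noting in the lcore table above: lcore k and lcore k+48 always report the same core/socket pair, so lcores 48-95 are the hyperthread siblings of 0-47, giving 2 sockets x 24 cores x 2 threads = 96. A quick sysfs cross-check on a box like this (paths are standard Linux; the exact output shown is an assumption):

    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list   # expect: 0,48
    lscpu | grep -E '^(Socket|Core|Thread)'                          # 2 sockets, 24 cores/socket, 2 threads/core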
00:08:25.494 EAL: IOMMU type 1 (Type 1) is supported 00:08:25.494 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:25.494 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:25.494 EAL: VFIO support initialized 00:08:25.494 EAL: Ask a virtual area of 0x2e000 bytes 00:08:25.494 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:25.494 EAL: Setting up physically contiguous memory... 00:08:25.494 EAL: Setting maximum number of open files to 524288 00:08:25.494 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:25.494 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:25.494 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:25.494 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:25.494 EAL: Ask a virtual area of 0x61000 bytes 00:08:25.494 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:25.494 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:25.494 EAL: Ask a virtual area of 0x400000000 bytes 00:08:25.494 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:25.494 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:25.494 EAL: Hugepages will be freed exactly as allocated. 00:08:25.494 EAL: No shared files mode enabled, IPC is disabled 00:08:25.494 EAL: No shared files mode enabled, IPC is disabled 00:08:25.494 EAL: TSC frequency is ~2300000 KHz 00:08:25.494 EAL: Main lcore 0 is ready (tid=7f6e7e42ca00;cpuset=[0]) 00:08:25.494 EAL: Trying to obtain current memory policy. 00:08:25.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.494 EAL: Restoring previous memory policy: 0 00:08:25.494 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 2MB 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:25.495 EAL: Mem event callback 'spdk:(nil)' registered 00:08:25.495 00:08:25.495 00:08:25.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.495 http://cunit.sourceforge.net/ 00:08:25.495 00:08:25.495 00:08:25.495 Suite: components_suite 00:08:25.495 Test: vtophys_malloc_test ...passed 00:08:25.495 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 4MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 4MB 00:08:25.495 EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 6MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 6MB 00:08:25.495 EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 10MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 10MB 00:08:25.495 EAL: Trying to obtain current memory policy. 
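The memseg reservations above check out arithmetically: one list is n_segs:8192 x hugepage_sz:2097152 = 0x400000000 bytes (16 GiB), matching every "size = 0x400000000" line, and 4 lists per socket x 2 sockets pre-reserve 128 GiB of virtual address space, none of it backed by physical pages until hugepages are actually allocated into the heap.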
00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 18MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 18MB 00:08:25.495 EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 34MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 34MB 00:08:25.495 EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 66MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 66MB 00:08:25.495 EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 130MB 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was shrunk by 130MB 00:08:25.495 EAL: Trying to obtain current memory policy. 00:08:25.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.495 EAL: Restoring previous memory policy: 4 00:08:25.495 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.495 EAL: request: mp_malloc_sync 00:08:25.495 EAL: No shared files mode enabled, IPC is disabled 00:08:25.495 EAL: Heap on socket 0 was expanded by 258MB 00:08:25.754 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.754 EAL: request: mp_malloc_sync 00:08:25.754 EAL: No shared files mode enabled, IPC is disabled 00:08:25.754 EAL: Heap on socket 0 was shrunk by 258MB 00:08:25.754 EAL: Trying to obtain current memory policy. 
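The expand/shrink sizes in vtophys_spdk_malloc_test march through 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258 MB so far, with 514 and 1026 MB to come): each pass doubles the requested buffer, the heap grows in whole 2 MB hugepages, and the constant extra 2 MB is plausibly one page of allocator overhead. The paired "shrunk by" lines show every expansion returned in full on free, the "Hugepages will be freed exactly as allocated" behavior announced at EAL startup.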
00:08:25.754 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:25.754 EAL: Restoring previous memory policy: 4 00:08:25.754 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.754 EAL: request: mp_malloc_sync 00:08:25.754 EAL: No shared files mode enabled, IPC is disabled 00:08:25.754 EAL: Heap on socket 0 was expanded by 514MB 00:08:25.754 EAL: Calling mem event callback 'spdk:(nil)' 00:08:26.012 EAL: request: mp_malloc_sync 00:08:26.012 EAL: No shared files mode enabled, IPC is disabled 00:08:26.012 EAL: Heap on socket 0 was shrunk by 514MB 00:08:26.012 EAL: Trying to obtain current memory policy. 00:08:26.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:26.012 EAL: Restoring previous memory policy: 4 00:08:26.012 EAL: Calling mem event callback 'spdk:(nil)' 00:08:26.012 EAL: request: mp_malloc_sync 00:08:26.012 EAL: No shared files mode enabled, IPC is disabled 00:08:26.012 EAL: Heap on socket 0 was expanded by 1026MB 00:08:26.271 EAL: Calling mem event callback 'spdk:(nil)' 00:08:26.529 EAL: request: mp_malloc_sync 00:08:26.529 EAL: No shared files mode enabled, IPC is disabled 00:08:26.529 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:26.529 passed 00:08:26.529 00:08:26.529 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.529 suites 1 1 n/a 0 0 00:08:26.529 tests 2 2 2 0 0 00:08:26.529 asserts 497 497 497 0 n/a 00:08:26.529 00:08:26.529 Elapsed time = 0.962 seconds 00:08:26.529 EAL: Calling mem event callback 'spdk:(nil)' 00:08:26.529 EAL: request: mp_malloc_sync 00:08:26.529 EAL: No shared files mode enabled, IPC is disabled 00:08:26.529 EAL: Heap on socket 0 was shrunk by 2MB 00:08:26.529 EAL: No shared files mode enabled, IPC is disabled 00:08:26.529 EAL: No shared files mode enabled, IPC is disabled 00:08:26.529 EAL: No shared files mode enabled, IPC is disabled 00:08:26.529 00:08:26.529 real 0m1.075s 00:08:26.529 user 0m0.638s 00:08:26.529 sys 0m0.408s 00:08:26.529 10:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.529 10:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.529 ************************************ 00:08:26.529 END TEST env_vtophys 00:08:26.529 ************************************ 00:08:26.529 10:03:39 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:26.529 10:03:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.529 10:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.529 10:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.529 ************************************ 00:08:26.529 START TEST env_pci 00:08:26.529 ************************************ 00:08:26.529 10:03:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:26.529 00:08:26.529 00:08:26.529 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.529 http://cunit.sourceforge.net/ 00:08:26.529 00:08:26.529 00:08:26.529 Suite: pci 00:08:26.529 Test: pci_hook ...[2024-04-24 10:03:39.643130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 141968 has claimed it 00:08:26.529 EAL: Cannot find device (10000:00:01.0) 00:08:26.529 EAL: Failed to attach device on primary process 00:08:26.529 passed 00:08:26.529 00:08:26.529 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.529 suites 1 1 n/a 0 0 00:08:26.529 tests 1 1 1 0 0 
00:08:26.529 asserts 25 25 25 0 n/a 00:08:26.529 00:08:26.529 Elapsed time = 0.025 seconds 00:08:26.529 00:08:26.529 real 0m0.044s 00:08:26.529 user 0m0.016s 00:08:26.529 sys 0m0.028s 00:08:26.529 10:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.529 10:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.529 ************************************ 00:08:26.530 END TEST env_pci 00:08:26.530 ************************************ 00:08:26.530 10:03:39 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:26.530 10:03:39 -- env/env.sh@15 -- # uname 00:08:26.530 10:03:39 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:26.530 10:03:39 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:26.530 10:03:39 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:26.530 10:03:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:26.530 10:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.530 10:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.530 ************************************ 00:08:26.530 START TEST env_dpdk_post_init 00:08:26.530 ************************************ 00:08:26.530 10:03:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:26.530 EAL: Detected CPU lcores: 96 00:08:26.530 EAL: Detected NUMA nodes: 2 00:08:26.530 EAL: Detected shared linkage of DPDK 00:08:26.530 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:26.530 EAL: Selected IOVA mode 'VA' 00:08:26.530 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.530 EAL: VFIO support initialized 00:08:26.530 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:26.530 EAL: Using IOMMU type 1 (Type 1) 00:08:26.530 EAL: Ignore mapping IO port bar(1) 00:08:26.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:08:26.789 EAL: Ignore mapping IO port bar(1) 00:08:26.789 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:08:27.357 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:08:27.617 EAL: Ignore mapping IO port bar(1) 00:08:27.617 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:30.949 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:08:30.949 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:08:30.949 Starting DPDK initialization... 00:08:30.949 Starting SPDK post initialization... 00:08:30.949 SPDK NVMe probe 00:08:30.949 Attaching to 0000:5e:00.0 00:08:30.949 Attached to 0000:5e:00.0 00:08:30.949 Cleaning up... 00:08:30.949 00:08:30.949 real 0m4.312s 00:08:30.949 user 0m3.281s 00:08:30.949 sys 0m0.102s 00:08:30.949 10:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.949 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:30.949 ************************************ 00:08:30.949 END TEST env_dpdk_post_init 00:08:30.949 ************************************ 00:08:30.949 10:03:44 -- env/env.sh@26 -- # uname 00:08:30.949 10:03:44 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:30.949 10:03:44 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:30.949 10:03:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:30.949 10:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.949 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:30.949 ************************************ 00:08:30.949 START TEST env_mem_callbacks 00:08:30.949 ************************************ 00:08:30.949 10:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:30.949 EAL: Detected CPU lcores: 96 00:08:30.949 EAL: Detected NUMA nodes: 2 00:08:30.949 EAL: Detected shared linkage of DPDK 00:08:30.949 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:30.949 EAL: Selected IOVA mode 'VA' 00:08:30.949 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.949 EAL: VFIO support initialized 00:08:30.949 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:30.949 00:08:30.949 00:08:30.949 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.949 http://cunit.sourceforge.net/ 00:08:30.949 00:08:30.949 00:08:30.949 Suite: memory 00:08:30.949 Test: test ... 
00:08:30.949 register 0x200000200000 2097152 00:08:30.949 malloc 3145728 00:08:30.949 register 0x200000400000 4194304 00:08:30.949 buf 0x200000500000 len 3145728 PASSED 00:08:30.949 malloc 64 00:08:30.949 buf 0x2000004fff40 len 64 PASSED 00:08:30.949 malloc 4194304 00:08:30.949 register 0x200000800000 6291456 00:08:30.949 buf 0x200000a00000 len 4194304 PASSED 00:08:30.949 free 0x200000500000 3145728 00:08:30.949 free 0x2000004fff40 64 00:08:30.949 unregister 0x200000400000 4194304 PASSED 00:08:30.949 free 0x200000a00000 4194304 00:08:30.949 unregister 0x200000800000 6291456 PASSED 00:08:30.949 malloc 8388608 00:08:30.949 register 0x200000400000 10485760 00:08:30.949 buf 0x200000600000 len 8388608 PASSED 00:08:30.949 free 0x200000600000 8388608 00:08:30.949 unregister 0x200000400000 10485760 PASSED 00:08:30.949 passed 00:08:30.949 00:08:30.949 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.949 suites 1 1 n/a 0 0 00:08:30.949 tests 1 1 1 0 0 00:08:30.949 asserts 15 15 15 0 n/a 00:08:30.949 00:08:30.949 Elapsed time = 0.005 seconds 00:08:30.949 00:08:30.949 real 0m0.053s 00:08:30.949 user 0m0.013s 00:08:30.949 sys 0m0.040s 00:08:30.949 10:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.949 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:30.949 ************************************ 00:08:30.949 END TEST env_mem_callbacks 00:08:30.949 ************************************ 00:08:30.949 00:08:30.949 real 0m5.919s 00:08:30.949 user 0m4.208s 00:08:30.949 sys 0m0.790s 00:08:30.949 10:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.949 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:30.949 ************************************ 00:08:30.949 END TEST env 00:08:30.949 ************************************ 00:08:30.949 10:03:44 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:30.949 10:03:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:30.949 10:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.949 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:30.949 ************************************ 00:08:30.949 START TEST rpc 00:08:30.949 ************************************ 00:08:30.949 10:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:31.208 * Looking for test storage... 00:08:31.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:31.208 10:03:44 -- rpc/rpc.sh@65 -- # spdk_pid=142930 00:08:31.208 10:03:44 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.208 10:03:44 -- rpc/rpc.sh@67 -- # waitforlisten 142930 00:08:31.208 10:03:44 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:31.208 10:03:44 -- common/autotest_common.sh@819 -- # '[' -z 142930 ']' 00:08:31.208 10:03:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.208 10:03:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.208 10:03:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
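
The register/unregister lines above come from the callbacks the mem_callbacks test installs, firing as the env library maps and unmaps memory underneath its malloc calls. Past that, rpc.sh has launched spdk_tgt with the bdev tracepoint group enabled (-e bdev, pid 142930) and is now blocked in waitforlisten until the target answers on /var/tmp/spdk.sock. Roughly, that wait amounts to polling the RPC socket; a minimal sketch of the idea (the real helper in autotest_common.sh is more elaborate, and using rpc_get_methods as the liveness probe is this note's choice, not taken from the trace):

    # poll until spdk_tgt responds on its RPC socket; assumes cwd is the spdk checkout
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket is live, every rpc_cmd in the tests below resolves to scripts/rpc.py against it. The large JSON arrays that follow are bdev_get_bdevs output: rpc_integrity creates a malloc bdev, layers a passthru bdev on it (note "claimed": true and "claim_type": "exclusive_write" appearing on Malloc0 afterwards), then tears both down:

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512          # returns the new name, Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length        # 2 while Passthru0 exists
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0
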
00:08:31.208 10:03:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.208 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.208 [2024-04-24 10:03:44.301638] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:31.208 [2024-04-24 10:03:44.301685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142930 ] 00:08:31.208 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.208 [2024-04-24 10:03:44.356236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.208 [2024-04-24 10:03:44.434828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.208 [2024-04-24 10:03:44.434933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:31.208 [2024-04-24 10:03:44.434941] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 142930' to capture a snapshot of events at runtime. 00:08:31.208 [2024-04-24 10:03:44.434948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid142930 for offline analysis/debug. 00:08:31.208 [2024-04-24 10:03:44.434968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.145 10:03:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.145 10:03:45 -- common/autotest_common.sh@852 -- # return 0 00:08:32.145 10:03:45 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:32.145 10:03:45 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:32.145 10:03:45 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:32.145 10:03:45 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:32.145 10:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.145 10:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.145 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.145 ************************************ 00:08:32.145 START TEST rpc_integrity 00:08:32.146 ************************************ 00:08:32.146 10:03:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:32.146 10:03:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:32.146 10:03:45 -- rpc/rpc.sh@13 -- # jq length 00:08:32.146 10:03:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:32.146 10:03:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:32.146 10:03:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:32.146 { 00:08:32.146 "name": "Malloc0", 00:08:32.146 "aliases": [ 00:08:32.146 "9af6ae55-dbb3-4ac7-8933-ae113029e713" 00:08:32.146 ], 00:08:32.146 "product_name": "Malloc disk", 00:08:32.146 "block_size": 512, 00:08:32.146 "num_blocks": 16384, 00:08:32.146 "uuid": "9af6ae55-dbb3-4ac7-8933-ae113029e713", 00:08:32.146 "assigned_rate_limits": { 00:08:32.146 "rw_ios_per_sec": 0, 00:08:32.146 "rw_mbytes_per_sec": 0, 00:08:32.146 "r_mbytes_per_sec": 0, 00:08:32.146 "w_mbytes_per_sec": 0 00:08:32.146 }, 00:08:32.146 "claimed": false, 00:08:32.146 "zoned": false, 00:08:32.146 "supported_io_types": { 00:08:32.146 "read": true, 00:08:32.146 "write": true, 00:08:32.146 "unmap": true, 00:08:32.146 "write_zeroes": true, 00:08:32.146 "flush": true, 00:08:32.146 "reset": true, 00:08:32.146 "compare": false, 00:08:32.146 "compare_and_write": false, 00:08:32.146 "abort": true, 00:08:32.146 "nvme_admin": false, 00:08:32.146 "nvme_io": false 00:08:32.146 }, 00:08:32.146 "memory_domains": [ 00:08:32.146 { 00:08:32.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.146 "dma_device_type": 2 00:08:32.146 } 00:08:32.146 ], 00:08:32.146 "driver_specific": {} 00:08:32.146 } 00:08:32.146 ]' 00:08:32.146 10:03:45 -- rpc/rpc.sh@17 -- # jq length 00:08:32.146 10:03:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:32.146 10:03:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 [2024-04-24 10:03:45.221815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:32.146 [2024-04-24 10:03:45.221846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.146 [2024-04-24 10:03:45.221857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22944f0 00:08:32.146 [2024-04-24 10:03:45.221864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.146 [2024-04-24 10:03:45.222929] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.146 [2024-04-24 10:03:45.222950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:32.146 Passthru0 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:32.146 { 00:08:32.146 "name": "Malloc0", 00:08:32.146 "aliases": [ 00:08:32.146 "9af6ae55-dbb3-4ac7-8933-ae113029e713" 00:08:32.146 ], 00:08:32.146 "product_name": "Malloc disk", 00:08:32.146 "block_size": 512, 00:08:32.146 "num_blocks": 16384, 00:08:32.146 "uuid": "9af6ae55-dbb3-4ac7-8933-ae113029e713", 00:08:32.146 "assigned_rate_limits": { 00:08:32.146 "rw_ios_per_sec": 0, 00:08:32.146 "rw_mbytes_per_sec": 0, 00:08:32.146 
"r_mbytes_per_sec": 0, 00:08:32.146 "w_mbytes_per_sec": 0 00:08:32.146 }, 00:08:32.146 "claimed": true, 00:08:32.146 "claim_type": "exclusive_write", 00:08:32.146 "zoned": false, 00:08:32.146 "supported_io_types": { 00:08:32.146 "read": true, 00:08:32.146 "write": true, 00:08:32.146 "unmap": true, 00:08:32.146 "write_zeroes": true, 00:08:32.146 "flush": true, 00:08:32.146 "reset": true, 00:08:32.146 "compare": false, 00:08:32.146 "compare_and_write": false, 00:08:32.146 "abort": true, 00:08:32.146 "nvme_admin": false, 00:08:32.146 "nvme_io": false 00:08:32.146 }, 00:08:32.146 "memory_domains": [ 00:08:32.146 { 00:08:32.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.146 "dma_device_type": 2 00:08:32.146 } 00:08:32.146 ], 00:08:32.146 "driver_specific": {} 00:08:32.146 }, 00:08:32.146 { 00:08:32.146 "name": "Passthru0", 00:08:32.146 "aliases": [ 00:08:32.146 "6d0ca51b-2fff-598a-9ad2-681edc7b6cd8" 00:08:32.146 ], 00:08:32.146 "product_name": "passthru", 00:08:32.146 "block_size": 512, 00:08:32.146 "num_blocks": 16384, 00:08:32.146 "uuid": "6d0ca51b-2fff-598a-9ad2-681edc7b6cd8", 00:08:32.146 "assigned_rate_limits": { 00:08:32.146 "rw_ios_per_sec": 0, 00:08:32.146 "rw_mbytes_per_sec": 0, 00:08:32.146 "r_mbytes_per_sec": 0, 00:08:32.146 "w_mbytes_per_sec": 0 00:08:32.146 }, 00:08:32.146 "claimed": false, 00:08:32.146 "zoned": false, 00:08:32.146 "supported_io_types": { 00:08:32.146 "read": true, 00:08:32.146 "write": true, 00:08:32.146 "unmap": true, 00:08:32.146 "write_zeroes": true, 00:08:32.146 "flush": true, 00:08:32.146 "reset": true, 00:08:32.146 "compare": false, 00:08:32.146 "compare_and_write": false, 00:08:32.146 "abort": true, 00:08:32.146 "nvme_admin": false, 00:08:32.146 "nvme_io": false 00:08:32.146 }, 00:08:32.146 "memory_domains": [ 00:08:32.146 { 00:08:32.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.146 "dma_device_type": 2 00:08:32.146 } 00:08:32.146 ], 00:08:32.146 "driver_specific": { 00:08:32.146 "passthru": { 00:08:32.146 "name": "Passthru0", 00:08:32.146 "base_bdev_name": "Malloc0" 00:08:32.146 } 00:08:32.146 } 00:08:32.146 } 00:08:32.146 ]' 00:08:32.146 10:03:45 -- rpc/rpc.sh@21 -- # jq length 00:08:32.146 10:03:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:32.146 10:03:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:32.146 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.146 10:03:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:32.146 10:03:45 -- rpc/rpc.sh@26 -- # jq length 00:08:32.146 10:03:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:32.146 00:08:32.146 real 0m0.248s 00:08:32.146 user 0m0.156s 00:08:32.146 sys 0m0.028s 00:08:32.146 10:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.146 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.146 ************************************ 
00:08:32.146 END TEST rpc_integrity 00:08:32.146 ************************************ 00:08:32.147 10:03:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:32.147 10:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.147 10:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.147 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.147 ************************************ 00:08:32.147 START TEST rpc_plugins 00:08:32.147 ************************************ 00:08:32.147 10:03:45 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:32.147 10:03:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:32.147 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.147 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.147 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.147 10:03:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:32.147 10:03:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:32.147 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.147 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.147 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.147 10:03:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:32.147 { 00:08:32.147 "name": "Malloc1", 00:08:32.147 "aliases": [ 00:08:32.147 "af537c06-e6c2-4687-a563-acd5c50480a6" 00:08:32.147 ], 00:08:32.147 "product_name": "Malloc disk", 00:08:32.147 "block_size": 4096, 00:08:32.147 "num_blocks": 256, 00:08:32.147 "uuid": "af537c06-e6c2-4687-a563-acd5c50480a6", 00:08:32.147 "assigned_rate_limits": { 00:08:32.147 "rw_ios_per_sec": 0, 00:08:32.147 "rw_mbytes_per_sec": 0, 00:08:32.147 "r_mbytes_per_sec": 0, 00:08:32.147 "w_mbytes_per_sec": 0 00:08:32.147 }, 00:08:32.147 "claimed": false, 00:08:32.147 "zoned": false, 00:08:32.147 "supported_io_types": { 00:08:32.147 "read": true, 00:08:32.147 "write": true, 00:08:32.147 "unmap": true, 00:08:32.147 "write_zeroes": true, 00:08:32.147 "flush": true, 00:08:32.147 "reset": true, 00:08:32.147 "compare": false, 00:08:32.147 "compare_and_write": false, 00:08:32.147 "abort": true, 00:08:32.147 "nvme_admin": false, 00:08:32.147 "nvme_io": false 00:08:32.147 }, 00:08:32.147 "memory_domains": [ 00:08:32.147 { 00:08:32.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.147 "dma_device_type": 2 00:08:32.147 } 00:08:32.147 ], 00:08:32.147 "driver_specific": {} 00:08:32.147 } 00:08:32.147 ]' 00:08:32.147 10:03:45 -- rpc/rpc.sh@32 -- # jq length 00:08:32.405 10:03:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:32.405 10:03:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:32.405 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.405 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.405 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.405 10:03:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:32.405 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.405 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.405 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.405 10:03:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:32.405 10:03:45 -- rpc/rpc.sh@36 -- # jq length 00:08:32.405 10:03:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:32.405 00:08:32.405 real 0m0.133s 00:08:32.405 user 0m0.090s 00:08:32.405 sys 0m0.013s 00:08:32.405 10:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.405 10:03:45 -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.405 ************************************ 00:08:32.405 END TEST rpc_plugins 00:08:32.405 ************************************ 00:08:32.405 10:03:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:32.405 10:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.405 10:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.405 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.405 ************************************ 00:08:32.405 START TEST rpc_trace_cmd_test 00:08:32.405 ************************************ 00:08:32.405 10:03:45 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:32.405 10:03:45 -- rpc/rpc.sh@40 -- # local info 00:08:32.405 10:03:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:32.405 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.405 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.405 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.405 10:03:45 -- rpc/rpc.sh@42 -- # info='{ 00:08:32.405 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid142930", 00:08:32.405 "tpoint_group_mask": "0x8", 00:08:32.405 "iscsi_conn": { 00:08:32.405 "mask": "0x2", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "scsi": { 00:08:32.405 "mask": "0x4", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "bdev": { 00:08:32.405 "mask": "0x8", 00:08:32.405 "tpoint_mask": "0xffffffffffffffff" 00:08:32.405 }, 00:08:32.405 "nvmf_rdma": { 00:08:32.405 "mask": "0x10", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "nvmf_tcp": { 00:08:32.405 "mask": "0x20", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "ftl": { 00:08:32.405 "mask": "0x40", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "blobfs": { 00:08:32.405 "mask": "0x80", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "dsa": { 00:08:32.405 "mask": "0x200", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "thread": { 00:08:32.405 "mask": "0x400", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "nvme_pcie": { 00:08:32.405 "mask": "0x800", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "iaa": { 00:08:32.405 "mask": "0x1000", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "nvme_tcp": { 00:08:32.405 "mask": "0x2000", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 }, 00:08:32.405 "bdev_nvme": { 00:08:32.405 "mask": "0x4000", 00:08:32.405 "tpoint_mask": "0x0" 00:08:32.405 } 00:08:32.405 }' 00:08:32.405 10:03:45 -- rpc/rpc.sh@43 -- # jq length 00:08:32.405 10:03:45 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:32.405 10:03:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:32.405 10:03:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:32.405 10:03:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:32.405 10:03:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:32.405 10:03:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:32.664 10:03:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:32.664 10:03:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:32.664 10:03:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:32.664 00:08:32.664 real 0m0.210s 00:08:32.664 user 0m0.175s 00:08:32.664 sys 0m0.024s 00:08:32.664 10:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 ************************************ 
00:08:32.664 END TEST rpc_trace_cmd_test 00:08:32.664 ************************************ 00:08:32.664 10:03:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:32.664 10:03:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:32.664 10:03:45 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:32.664 10:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.664 10:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 ************************************ 00:08:32.664 START TEST rpc_daemon_integrity 00:08:32.664 ************************************ 00:08:32.664 10:03:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:32.664 10:03:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:32.664 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.664 10:03:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:32.664 10:03:45 -- rpc/rpc.sh@13 -- # jq length 00:08:32.664 10:03:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:32.664 10:03:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:32.664 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.664 10:03:45 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:32.664 10:03:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:32.664 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.664 10:03:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:32.664 { 00:08:32.664 "name": "Malloc2", 00:08:32.664 "aliases": [ 00:08:32.664 "9f47d5c2-50ba-41eb-a3f2-0cb6ea7b85e4" 00:08:32.664 ], 00:08:32.664 "product_name": "Malloc disk", 00:08:32.664 "block_size": 512, 00:08:32.664 "num_blocks": 16384, 00:08:32.664 "uuid": "9f47d5c2-50ba-41eb-a3f2-0cb6ea7b85e4", 00:08:32.664 "assigned_rate_limits": { 00:08:32.664 "rw_ios_per_sec": 0, 00:08:32.664 "rw_mbytes_per_sec": 0, 00:08:32.664 "r_mbytes_per_sec": 0, 00:08:32.664 "w_mbytes_per_sec": 0 00:08:32.664 }, 00:08:32.664 "claimed": false, 00:08:32.664 "zoned": false, 00:08:32.664 "supported_io_types": { 00:08:32.664 "read": true, 00:08:32.664 "write": true, 00:08:32.664 "unmap": true, 00:08:32.664 "write_zeroes": true, 00:08:32.664 "flush": true, 00:08:32.664 "reset": true, 00:08:32.664 "compare": false, 00:08:32.664 "compare_and_write": false, 00:08:32.664 "abort": true, 00:08:32.664 "nvme_admin": false, 00:08:32.664 "nvme_io": false 00:08:32.664 }, 00:08:32.664 "memory_domains": [ 00:08:32.664 { 00:08:32.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.664 "dma_device_type": 2 00:08:32.664 } 00:08:32.664 ], 00:08:32.664 "driver_specific": {} 00:08:32.664 } 00:08:32.664 ]' 00:08:32.664 10:03:45 -- rpc/rpc.sh@17 -- # jq length 00:08:32.664 10:03:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:32.664 10:03:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:32.664 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 [2024-04-24 10:03:45.915708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:32.664 [2024-04-24 
10:03:45.915735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.664 [2024-04-24 10:03:45.915748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2294090 00:08:32.664 [2024-04-24 10:03:45.915754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.664 [2024-04-24 10:03:45.916695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.664 [2024-04-24 10:03:45.916716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:32.664 Passthru0 00:08:32.664 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.664 10:03:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:32.664 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.664 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.664 10:03:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:32.664 { 00:08:32.664 "name": "Malloc2", 00:08:32.664 "aliases": [ 00:08:32.664 "9f47d5c2-50ba-41eb-a3f2-0cb6ea7b85e4" 00:08:32.664 ], 00:08:32.664 "product_name": "Malloc disk", 00:08:32.664 "block_size": 512, 00:08:32.664 "num_blocks": 16384, 00:08:32.664 "uuid": "9f47d5c2-50ba-41eb-a3f2-0cb6ea7b85e4", 00:08:32.664 "assigned_rate_limits": { 00:08:32.664 "rw_ios_per_sec": 0, 00:08:32.664 "rw_mbytes_per_sec": 0, 00:08:32.664 "r_mbytes_per_sec": 0, 00:08:32.664 "w_mbytes_per_sec": 0 00:08:32.664 }, 00:08:32.664 "claimed": true, 00:08:32.664 "claim_type": "exclusive_write", 00:08:32.664 "zoned": false, 00:08:32.664 "supported_io_types": { 00:08:32.664 "read": true, 00:08:32.664 "write": true, 00:08:32.664 "unmap": true, 00:08:32.664 "write_zeroes": true, 00:08:32.664 "flush": true, 00:08:32.664 "reset": true, 00:08:32.664 "compare": false, 00:08:32.664 "compare_and_write": false, 00:08:32.664 "abort": true, 00:08:32.664 "nvme_admin": false, 00:08:32.664 "nvme_io": false 00:08:32.664 }, 00:08:32.664 "memory_domains": [ 00:08:32.664 { 00:08:32.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.664 "dma_device_type": 2 00:08:32.664 } 00:08:32.664 ], 00:08:32.664 "driver_specific": {} 00:08:32.664 }, 00:08:32.664 { 00:08:32.664 "name": "Passthru0", 00:08:32.664 "aliases": [ 00:08:32.664 "4a65dd82-33fe-5535-9262-8044b857e61f" 00:08:32.664 ], 00:08:32.664 "product_name": "passthru", 00:08:32.664 "block_size": 512, 00:08:32.664 "num_blocks": 16384, 00:08:32.664 "uuid": "4a65dd82-33fe-5535-9262-8044b857e61f", 00:08:32.664 "assigned_rate_limits": { 00:08:32.664 "rw_ios_per_sec": 0, 00:08:32.664 "rw_mbytes_per_sec": 0, 00:08:32.664 "r_mbytes_per_sec": 0, 00:08:32.664 "w_mbytes_per_sec": 0 00:08:32.664 }, 00:08:32.664 "claimed": false, 00:08:32.664 "zoned": false, 00:08:32.664 "supported_io_types": { 00:08:32.664 "read": true, 00:08:32.664 "write": true, 00:08:32.664 "unmap": true, 00:08:32.664 "write_zeroes": true, 00:08:32.664 "flush": true, 00:08:32.664 "reset": true, 00:08:32.664 "compare": false, 00:08:32.664 "compare_and_write": false, 00:08:32.664 "abort": true, 00:08:32.664 "nvme_admin": false, 00:08:32.664 "nvme_io": false 00:08:32.664 }, 00:08:32.664 "memory_domains": [ 00:08:32.664 { 00:08:32.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.664 "dma_device_type": 2 00:08:32.664 } 00:08:32.664 ], 00:08:32.664 "driver_specific": { 00:08:32.664 "passthru": { 00:08:32.664 "name": "Passthru0", 00:08:32.664 "base_bdev_name": "Malloc2" 00:08:32.664 } 00:08:32.664 } 00:08:32.664 } 
00:08:32.664 ]' 00:08:32.664 10:03:45 -- rpc/rpc.sh@21 -- # jq length 00:08:32.924 10:03:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:32.924 10:03:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:32.924 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.924 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.924 10:03:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:32.924 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.924 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.924 10:03:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:32.924 10:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.924 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 10:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.924 10:03:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:32.924 10:03:45 -- rpc/rpc.sh@26 -- # jq length 00:08:32.924 10:03:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:32.924 00:08:32.924 real 0m0.230s 00:08:32.924 user 0m0.144s 00:08:32.924 sys 0m0.021s 00:08:32.924 10:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.924 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 ************************************ 00:08:32.924 END TEST rpc_daemon_integrity 00:08:32.924 ************************************ 00:08:32.924 10:03:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:32.924 10:03:46 -- rpc/rpc.sh@84 -- # killprocess 142930 00:08:32.924 10:03:46 -- common/autotest_common.sh@926 -- # '[' -z 142930 ']' 00:08:32.924 10:03:46 -- common/autotest_common.sh@930 -- # kill -0 142930 00:08:32.924 10:03:46 -- common/autotest_common.sh@931 -- # uname 00:08:32.924 10:03:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.924 10:03:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142930 00:08:32.924 10:03:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.924 10:03:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.924 10:03:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142930' 00:08:32.924 killing process with pid 142930 00:08:32.924 10:03:46 -- common/autotest_common.sh@945 -- # kill 142930 00:08:32.924 10:03:46 -- common/autotest_common.sh@950 -- # wait 142930 00:08:33.183 00:08:33.183 real 0m2.250s 00:08:33.183 user 0m2.888s 00:08:33.183 sys 0m0.529s 00:08:33.183 10:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.183 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.183 ************************************ 00:08:33.183 END TEST rpc 00:08:33.183 ************************************ 00:08:33.443 10:03:46 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:33.443 10:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:33.443 10:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.443 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.443 ************************************ 00:08:33.443 START TEST rpc_client 00:08:33.443 ************************************ 00:08:33.443 10:03:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:33.443 * 
Looking for test storage... 00:08:33.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:33.443 10:03:46 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:33.443 OK 00:08:33.443 10:03:46 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:33.443 00:08:33.443 real 0m0.103s 00:08:33.443 user 0m0.045s 00:08:33.443 sys 0m0.065s 00:08:33.443 10:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.443 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.443 ************************************ 00:08:33.443 END TEST rpc_client 00:08:33.443 ************************************ 00:08:33.443 10:03:46 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:33.443 10:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:33.443 10:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.443 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.443 ************************************ 00:08:33.443 START TEST json_config 00:08:33.443 ************************************ 00:08:33.443 10:03:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:33.443 10:03:46 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.443 10:03:46 -- nvmf/common.sh@7 -- # uname -s 00:08:33.443 10:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.443 10:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.443 10:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.443 10:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.443 10:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.443 10:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.443 10:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.443 10:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.443 10:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.443 10:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.443 10:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:33.443 10:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:33.443 10:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.443 10:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.443 10:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:33.443 10:03:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.443 10:03:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.443 10:03:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.443 10:03:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.443 10:03:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.443 10:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.443 10:03:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.443 10:03:46 -- paths/export.sh@5 -- # export PATH 00:08:33.443 10:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.443 10:03:46 -- nvmf/common.sh@46 -- # : 0 00:08:33.443 10:03:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:33.443 10:03:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:33.443 10:03:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:33.443 10:03:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.443 10:03:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.443 10:03:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:33.443 10:03:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:33.443 10:03:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:33.443 10:03:46 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:33.443 10:03:46 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:33.443 10:03:46 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:33.443 10:03:46 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:33.443 10:03:46 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:08:33.443 10:03:46 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:33.443 10:03:46 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:33.443 10:03:46 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:33.443 10:03:46 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:33.443 10:03:46 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:33.444 10:03:46 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:33.444 10:03:46 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:33.444 10:03:46 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:33.444 10:03:46 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:33.444 10:03:46 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:33.444 INFO: JSON configuration test init 00:08:33.444 10:03:46 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:33.444 10:03:46 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:33.444 10:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:33.444 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.444 10:03:46 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:33.444 10:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:33.444 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.444 10:03:46 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:33.444 10:03:46 -- json_config/json_config.sh@98 -- # local app=target 00:08:33.444 10:03:46 -- json_config/json_config.sh@99 -- # shift 00:08:33.444 10:03:46 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:33.444 10:03:46 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:33.444 10:03:46 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:33.444 10:03:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:33.444 10:03:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:33.444 10:03:46 -- json_config/json_config.sh@111 -- # app_pid[$app]=143602 00:08:33.444 10:03:46 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:33.444 Waiting for target to run... 00:08:33.444 10:03:46 -- json_config/json_config.sh@114 -- # waitforlisten 143602 /var/tmp/spdk_tgt.sock 00:08:33.444 10:03:46 -- common/autotest_common.sh@819 -- # '[' -z 143602 ']' 00:08:33.444 10:03:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:33.444 10:03:46 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:33.444 10:03:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.444 10:03:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:33.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:33.444 10:03:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.444 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:33.703 [2024-04-24 10:03:46.751356] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:08:33.703 [2024-04-24 10:03:46.751407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143602 ] 00:08:33.703 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.962 [2024-04-24 10:03:47.019452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.962 [2024-04-24 10:03:47.085180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.962 [2024-04-24 10:03:47.085273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.529 10:03:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.529 10:03:47 -- common/autotest_common.sh@852 -- # return 0 00:08:34.529 10:03:47 -- json_config/json_config.sh@115 -- # echo '' 00:08:34.529 00:08:34.529 10:03:47 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:34.529 10:03:47 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:34.529 10:03:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:34.529 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:34.529 10:03:47 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:34.530 10:03:47 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:34.530 10:03:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:34.530 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:08:34.530 10:03:47 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:34.530 10:03:47 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:34.530 10:03:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:37.820 10:03:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:37.820 10:03:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:37.820 10:03:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.820 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.820 10:03:50 -- json_config/json_config.sh@48 -- # local ret=0 00:08:37.820 10:03:50 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:37.820 10:03:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:37.820 10:03:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:37.820 10:03:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:37.820 10:03:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:37.820 10:03:50 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:37.820 10:03:50 -- json_config/json_config.sh@51 -- # local get_types 00:08:37.820 10:03:50 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:37.820 10:03:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:37.820 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.820 10:03:50 -- json_config/json_config.sh@58 -- # return 0 00:08:37.820 10:03:50 -- 
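
tgt_rpc in this test is a thin wrapper around scripts/rpc.py with -s /var/tmp/spdk_tgt.sock, the socket this dedicated target was started on. The target came up with --wait-for-rpc, so it accepts configuration RPCs before its subsystems initialize; the two calls traced above suggest gen_nvme.sh --json-with-subsystems is piped straight into load_config to seed that initial configuration. Later the test saves the live configuration back out and relaunches the target from the saved file, which is the round trip json_config.sh exists to verify. The two halves, as they appear further down in this log (paths shortened):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json
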
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:08:37.820 10:03:50 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:08:37.820 10:03:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.820 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.820 10:03:50 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:37.820 10:03:50 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:08:37.820 10:03:50 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:37.820 10:03:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:37.820 MallocForNvmf0 00:08:37.820 10:03:51 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:37.820 10:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:38.079 MallocForNvmf1 00:08:38.079 10:03:51 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:38.079 10:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:38.079 [2024-04-24 10:03:51.350895] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.338 10:03:51 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.338 10:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.338 10:03:51 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:38.338 10:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:38.596 10:03:51 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:38.596 10:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:38.596 10:03:51 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:38.596 10:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:38.854 [2024-04-24 10:03:52.016996] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
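
create_nvmf_subsystem_config has just assembled the NVMe-oF half of the configuration, ending with the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice above: two malloc bdevs to serve as namespaces, a TCP transport, one subsystem, and a TCP listener. The same sequence as standalone RPCs, taken directly from the tgt_rpc calls traced above:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
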
00:08:38.854 10:03:52 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:08:38.854 10:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:38.854 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.854 10:03:52 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:38.854 10:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:38.854 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.854 10:03:52 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:38.854 10:03:52 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:38.854 10:03:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:39.113 MallocBdevForConfigChangeCheck 00:08:39.113 10:03:52 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:39.113 10:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:39.113 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:08:39.113 10:03:52 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:39.113 10:03:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:39.371 10:03:52 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:39.371 INFO: shutting down applications... 00:08:39.371 10:03:52 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:39.371 10:03:52 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:39.371 10:03:52 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:39.371 10:03:52 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:41.273 Calling clear_iscsi_subsystem 00:08:41.273 Calling clear_nvmf_subsystem 00:08:41.273 Calling clear_nbd_subsystem 00:08:41.273 Calling clear_ublk_subsystem 00:08:41.273 Calling clear_vhost_blk_subsystem 00:08:41.273 Calling clear_vhost_scsi_subsystem 00:08:41.273 Calling clear_scheduler_subsystem 00:08:41.273 Calling clear_bdev_subsystem 00:08:41.273 Calling clear_accel_subsystem 00:08:41.273 Calling clear_vmd_subsystem 00:08:41.273 Calling clear_sock_subsystem 00:08:41.273 Calling clear_iobuf_subsystem 00:08:41.273 10:03:54 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:41.273 10:03:54 -- json_config/json_config.sh@396 -- # count=100 00:08:41.273 10:03:54 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:41.273 10:03:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:41.273 10:03:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:41.273 10:03:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:41.273 10:03:54 -- json_config/json_config.sh@398 -- # break 00:08:41.273 10:03:54 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:41.273 10:03:54 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:08:41.273 10:03:54 -- json_config/json_config.sh@120 -- # local app=target 00:08:41.273 10:03:54 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:41.273 10:03:54 -- json_config/json_config.sh@124 -- # [[ -n 143602 ]] 00:08:41.273 10:03:54 -- json_config/json_config.sh@127 -- # kill -SIGINT 143602 00:08:41.273 10:03:54 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:41.273 10:03:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:41.273 10:03:54 -- json_config/json_config.sh@130 -- # kill -0 143602 00:08:41.273 10:03:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:41.841 10:03:54 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:41.841 10:03:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:41.841 10:03:54 -- json_config/json_config.sh@130 -- # kill -0 143602 00:08:41.841 10:03:54 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:41.841 10:03:54 -- json_config/json_config.sh@132 -- # break 00:08:41.841 10:03:54 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:41.841 10:03:54 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:41.841 SPDK target shutdown done 00:08:41.841 10:03:54 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:41.841 INFO: relaunching applications... 00:08:41.841 10:03:54 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:41.841 10:03:54 -- json_config/json_config.sh@98 -- # local app=target 00:08:41.841 10:03:54 -- json_config/json_config.sh@99 -- # shift 00:08:41.841 10:03:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:41.841 10:03:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:41.841 10:03:54 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:41.841 10:03:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:41.841 10:03:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:41.841 10:03:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=145130 00:08:41.842 10:03:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:41.842 Waiting for target to run... 00:08:41.842 10:03:54 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:41.842 10:03:54 -- json_config/json_config.sh@114 -- # waitforlisten 145130 /var/tmp/spdk_tgt.sock 00:08:41.842 10:03:54 -- common/autotest_common.sh@819 -- # '[' -z 145130 ']' 00:08:41.842 10:03:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:41.842 10:03:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:41.842 10:03:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:41.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:41.842 10:03:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:41.842 10:03:54 -- common/autotest_common.sh@10 -- # set +x 00:08:41.842 [2024-04-24 10:03:54.960691] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:08:41.842 [2024-04-24 10:03:54.960746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145130 ] 00:08:41.842 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.410 [2024-04-24 10:03:55.403147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.410 [2024-04-24 10:03:55.486259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:42.410 [2024-04-24 10:03:55.486361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.699 [2024-04-24 10:03:58.485282] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.700 [2024-04-24 10:03:58.517598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:45.958 10:03:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.958 10:03:59 -- common/autotest_common.sh@852 -- # return 0 00:08:45.958 10:03:59 -- json_config/json_config.sh@115 -- # echo '' 00:08:45.958 00:08:45.958 10:03:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:45.958 10:03:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:45.958 INFO: Checking if target configuration is the same... 00:08:45.958 10:03:59 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:45.958 10:03:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:45.958 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:45.958 + '[' 2 -ne 2 ']' 00:08:45.958 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:45.958 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:45.958 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:45.958 +++ basename /dev/fd/62 00:08:45.958 ++ mktemp /tmp/62.XXX 00:08:45.958 + tmp_file_1=/tmp/62.55C 00:08:45.958 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:45.958 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:45.958 + tmp_file_2=/tmp/spdk_tgt_config.json.HUL 00:08:45.958 + ret=0 00:08:45.958 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:46.217 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:46.217 + diff -u /tmp/62.55C /tmp/spdk_tgt_config.json.HUL 00:08:46.217 + echo 'INFO: JSON config files are the same' 00:08:46.217 INFO: JSON config files are the same 00:08:46.217 + rm /tmp/62.55C /tmp/spdk_tgt_config.json.HUL 00:08:46.217 + exit 0 00:08:46.217 10:03:59 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:46.217 10:03:59 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:46.217 INFO: changing configuration and checking if this can be detected... 
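
The '+'-prefixed lines above are json_diff.sh running under xtrace, and they show how "is the configuration the same" is decided: both inputs (the live save_config stream arriving on /dev/fd/62 and the saved spdk_tgt_config.json) are normalized with config_filter.py -method sort, so key and array ordering cannot cause false mismatches, and the normalized files are compared with diff; exit 0 means identical. Condensed into explicit steps (the /tmp file names are this note's placeholders for the mktemp outputs in the trace):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json    # empty diff: configurations match
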
00:08:46.217 10:03:59 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:46.217 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:46.476 10:03:59 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:46.476 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:46.476 10:03:59 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:46.476 + '[' 2 -ne 2 ']' 00:08:46.476 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:46.476 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:46.476 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:46.476 +++ basename /dev/fd/62 00:08:46.476 ++ mktemp /tmp/62.XXX 00:08:46.476 + tmp_file_1=/tmp/62.14H 00:08:46.476 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:46.476 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:46.476 + tmp_file_2=/tmp/spdk_tgt_config.json.Auw 00:08:46.476 + ret=0 00:08:46.476 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:46.736 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:46.736 + diff -u /tmp/62.14H /tmp/spdk_tgt_config.json.Auw 00:08:46.736 + ret=1 00:08:46.736 + echo '=== Start of file: /tmp/62.14H ===' 00:08:46.736 + cat /tmp/62.14H 00:08:46.736 + echo '=== End of file: /tmp/62.14H ===' 00:08:46.736 + echo '' 00:08:46.736 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Auw ===' 00:08:46.736 + cat /tmp/spdk_tgt_config.json.Auw 00:08:46.736 + echo '=== End of file: /tmp/spdk_tgt_config.json.Auw ===' 00:08:46.736 + echo '' 00:08:46.736 + rm /tmp/62.14H /tmp/spdk_tgt_config.json.Auw 00:08:46.736 + exit 1 00:08:46.736 10:03:59 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:08:46.736 INFO: configuration change detected. 
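The change-detection pass forces drift by deleting a bdev that exists only for this purpose (MallocBdevForConfigChangeCheck), then repeats the same save/sort/diff; here diff returning 1 is the expected, passing outcome. A sketch along the same lines as the previous snippet:

    rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | $filter -method sort > /tmp/live2.json
    if ! diff -u /tmp/live2.json /tmp/ondisk.json > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi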
00:08:46.736 10:03:59 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:46.736 10:03:59 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:46.736 10:03:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.736 10:03:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 10:03:59 -- json_config/json_config.sh@360 -- # local ret=0 00:08:46.736 10:03:59 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:46.736 10:03:59 -- json_config/json_config.sh@370 -- # [[ -n 145130 ]] 00:08:46.736 10:03:59 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:46.736 10:03:59 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:46.736 10:03:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.736 10:03:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 10:03:59 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:08:46.736 10:03:59 -- json_config/json_config.sh@246 -- # uname -s 00:08:46.736 10:03:59 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:46.736 10:03:59 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:46.736 10:03:59 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:46.736 10:03:59 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:46.736 10:03:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.736 10:03:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 10:03:59 -- json_config/json_config.sh@376 -- # killprocess 145130 00:08:46.736 10:03:59 -- common/autotest_common.sh@926 -- # '[' -z 145130 ']' 00:08:46.736 10:03:59 -- common/autotest_common.sh@930 -- # kill -0 145130 00:08:46.736 10:03:59 -- common/autotest_common.sh@931 -- # uname 00:08:46.736 10:03:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:46.736 10:03:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145130 00:08:46.736 10:03:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:46.736 10:03:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:46.736 10:03:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145130' 00:08:46.736 killing process with pid 145130 00:08:46.736 10:03:59 -- common/autotest_common.sh@945 -- # kill 145130 00:08:46.736 10:03:59 -- common/autotest_common.sh@950 -- # wait 145130 00:08:48.640 10:04:01 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:48.640 10:04:01 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:48.640 10:04:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:48.640 10:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.640 10:04:01 -- json_config/json_config.sh@381 -- # return 0 00:08:48.640 10:04:01 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:48.640 INFO: Success 00:08:48.640 00:08:48.640 real 0m14.943s 00:08:48.640 user 0m16.023s 00:08:48.640 sys 0m1.884s 00:08:48.640 10:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.640 10:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.640 ************************************ 00:08:48.640 END TEST json_config 00:08:48.640 ************************************ 00:08:48.640 10:04:01 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:48.640 10:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:48.640 10:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.640 10:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.640 ************************************ 00:08:48.640 START TEST json_config_extra_key 00:08:48.640 ************************************ 00:08:48.640 10:04:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:48.640 10:04:01 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.640 10:04:01 -- nvmf/common.sh@7 -- # uname -s 00:08:48.640 10:04:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.640 10:04:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.640 10:04:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.640 10:04:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.640 10:04:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.640 10:04:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.640 10:04:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.640 10:04:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.640 10:04:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.640 10:04:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.640 10:04:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:48.640 10:04:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:48.640 10:04:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.640 10:04:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.640 10:04:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:48.640 10:04:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.640 10:04:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.640 10:04:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.640 10:04:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.640 10:04:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.640 10:04:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.641 10:04:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.641 10:04:01 -- paths/export.sh@5 -- # export PATH 00:08:48.641 10:04:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.641 10:04:01 -- nvmf/common.sh@46 -- # : 0 00:08:48.641 10:04:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.641 10:04:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.641 10:04:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.641 10:04:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.641 10:04:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.641 10:04:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.641 10:04:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.641 10:04:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:48.641 INFO: launching applications... 
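The @16..@19 declarations above show how json_config_extra_key.sh keys all per-app state by name in bash associative arrays; only the 'target' key is exercised in this run. Reduced to its essentials:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
    # later steps then look up "${app_socket[$app]}", "${app_pid[$app]}", etc.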
00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=146421 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:48.641 Waiting for target to run... 00:08:48.641 10:04:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 146421 /var/tmp/spdk_tgt.sock 00:08:48.641 10:04:01 -- common/autotest_common.sh@819 -- # '[' -z 146421 ']' 00:08:48.641 10:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:48.641 10:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:48.641 10:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:48.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:48.641 10:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:48.641 10:04:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.641 [2024-04-24 10:04:01.707171] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:48.641 [2024-04-24 10:04:01.707223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146421 ] 00:08:48.641 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.900 [2024-04-24 10:04:01.970122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.900 [2024-04-24 10:04:02.037124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.900 [2024-04-24 10:04:02.037220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.468 10:04:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:49.468 10:04:02 -- common/autotest_common.sh@852 -- # return 0 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:49.468 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:08:49.468 INFO: shutting down applications... 
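waitforlisten above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk_tgt.sock (the trace shows max_retries=100). A hedged illustration of what such a helper has to do; this is a sketch, not the autotest_common.sh implementation:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
        local max_retries=100
        for ((n = 0; n < max_retries; n++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # the UNIX socket has appeared
            sleep 0.1
        done
        return 1
    }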
00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 146421 ]] 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 146421 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 146421 00:08:49.468 10:04:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 146421 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:50.036 SPDK target shutdown done 00:08:50.036 10:04:03 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:50.036 Success 00:08:50.036 00:08:50.036 real 0m1.425s 00:08:50.036 user 0m1.251s 00:08:50.036 sys 0m0.342s 00:08:50.036 10:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.036 10:04:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.036 ************************************ 00:08:50.036 END TEST json_config_extra_key 00:08:50.036 ************************************ 00:08:50.036 10:04:03 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:50.036 10:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.036 10:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.036 10:04:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.036 ************************************ 00:08:50.036 START TEST alias_rpc 00:08:50.036 ************************************ 00:08:50.036 10:04:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:50.036 * Looking for test storage... 00:08:50.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:50.036 10:04:03 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:50.036 10:04:03 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=146699 00:08:50.036 10:04:03 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:50.036 10:04:03 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 146699 00:08:50.036 10:04:03 -- common/autotest_common.sh@819 -- # '[' -z 146699 ']' 00:08:50.036 10:04:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.036 10:04:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.036 10:04:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.036 10:04:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.036 10:04:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.036 [2024-04-24 10:04:03.181578] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:50.036 [2024-04-24 10:04:03.181625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146699 ] 00:08:50.036 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.036 [2024-04-24 10:04:03.235111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.036 [2024-04-24 10:04:03.306762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.036 [2024-04-24 10:04:03.306882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.974 10:04:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.974 10:04:03 -- common/autotest_common.sh@852 -- # return 0 00:08:50.974 10:04:03 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:50.974 10:04:04 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 146699 00:08:50.974 10:04:04 -- common/autotest_common.sh@926 -- # '[' -z 146699 ']' 00:08:50.974 10:04:04 -- common/autotest_common.sh@930 -- # kill -0 146699 00:08:50.974 10:04:04 -- common/autotest_common.sh@931 -- # uname 00:08:50.974 10:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:50.974 10:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146699 00:08:50.974 10:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:50.974 10:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:50.974 10:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146699' 00:08:50.974 killing process with pid 146699 00:08:50.974 10:04:04 -- common/autotest_common.sh@945 -- # kill 146699 00:08:50.974 10:04:04 -- common/autotest_common.sh@950 -- # wait 146699 00:08:51.541 00:08:51.541 real 0m1.491s 00:08:51.541 user 0m1.631s 00:08:51.541 sys 0m0.380s 00:08:51.541 10:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.541 10:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.541 ************************************ 00:08:51.541 END TEST alias_rpc 00:08:51.542 ************************************ 00:08:51.542 10:04:04 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:51.542 10:04:04 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:51.542 10:04:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:51.542 10:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:51.542 10:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.542 ************************************ 00:08:51.542 START TEST spdkcli_tcp 00:08:51.542 ************************************ 00:08:51.542 10:04:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:51.542 * Looking for test storage... 
00:08:51.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:51.542 10:04:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:51.542 10:04:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:51.542 10:04:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:51.542 10:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=146990 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@27 -- # waitforlisten 146990 00:08:51.542 10:04:04 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:51.542 10:04:04 -- common/autotest_common.sh@819 -- # '[' -z 146990 ']' 00:08:51.542 10:04:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.542 10:04:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:51.542 10:04:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.542 10:04:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:51.542 10:04:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.542 [2024-04-24 10:04:04.710538] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
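spdkcli_tcp exercises the same JSON-RPC surface over TCP: socat bridges the 127.0.0.1:9998 endpoint configured above to the target's UNIX socket, and rpc.py is pointed at the TCP address with retries, as the next trace lines show. Condensed:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"   # assumed cleanup step; the script's err_cleanup trap covers this in the real run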
00:08:51.542 [2024-04-24 10:04:04.710586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146990 ] 00:08:51.542 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.542 [2024-04-24 10:04:04.763628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.801 [2024-04-24 10:04:04.836443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.801 [2024-04-24 10:04:04.836585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.801 [2024-04-24 10:04:04.836588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.368 10:04:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.368 10:04:05 -- common/autotest_common.sh@852 -- # return 0 00:08:52.368 10:04:05 -- spdkcli/tcp.sh@31 -- # socat_pid=147129 00:08:52.368 10:04:05 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:52.368 10:04:05 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:52.626 [ 00:08:52.626 "bdev_malloc_delete", 00:08:52.626 "bdev_malloc_create", 00:08:52.626 "bdev_null_resize", 00:08:52.626 "bdev_null_delete", 00:08:52.626 "bdev_null_create", 00:08:52.626 "bdev_nvme_cuse_unregister", 00:08:52.626 "bdev_nvme_cuse_register", 00:08:52.626 "bdev_opal_new_user", 00:08:52.626 "bdev_opal_set_lock_state", 00:08:52.626 "bdev_opal_delete", 00:08:52.626 "bdev_opal_get_info", 00:08:52.626 "bdev_opal_create", 00:08:52.626 "bdev_nvme_opal_revert", 00:08:52.626 "bdev_nvme_opal_init", 00:08:52.626 "bdev_nvme_send_cmd", 00:08:52.626 "bdev_nvme_get_path_iostat", 00:08:52.626 "bdev_nvme_get_mdns_discovery_info", 00:08:52.626 "bdev_nvme_stop_mdns_discovery", 00:08:52.626 "bdev_nvme_start_mdns_discovery", 00:08:52.626 "bdev_nvme_set_multipath_policy", 00:08:52.626 "bdev_nvme_set_preferred_path", 00:08:52.626 "bdev_nvme_get_io_paths", 00:08:52.626 "bdev_nvme_remove_error_injection", 00:08:52.626 "bdev_nvme_add_error_injection", 00:08:52.626 "bdev_nvme_get_discovery_info", 00:08:52.626 "bdev_nvme_stop_discovery", 00:08:52.626 "bdev_nvme_start_discovery", 00:08:52.626 "bdev_nvme_get_controller_health_info", 00:08:52.626 "bdev_nvme_disable_controller", 00:08:52.626 "bdev_nvme_enable_controller", 00:08:52.626 "bdev_nvme_reset_controller", 00:08:52.626 "bdev_nvme_get_transport_statistics", 00:08:52.626 "bdev_nvme_apply_firmware", 00:08:52.626 "bdev_nvme_detach_controller", 00:08:52.626 "bdev_nvme_get_controllers", 00:08:52.626 "bdev_nvme_attach_controller", 00:08:52.626 "bdev_nvme_set_hotplug", 00:08:52.626 "bdev_nvme_set_options", 00:08:52.626 "bdev_passthru_delete", 00:08:52.626 "bdev_passthru_create", 00:08:52.626 "bdev_lvol_grow_lvstore", 00:08:52.626 "bdev_lvol_get_lvols", 00:08:52.626 "bdev_lvol_get_lvstores", 00:08:52.626 "bdev_lvol_delete", 00:08:52.626 "bdev_lvol_set_read_only", 00:08:52.626 "bdev_lvol_resize", 00:08:52.626 "bdev_lvol_decouple_parent", 00:08:52.626 "bdev_lvol_inflate", 00:08:52.626 "bdev_lvol_rename", 00:08:52.626 "bdev_lvol_clone_bdev", 00:08:52.626 "bdev_lvol_clone", 00:08:52.626 "bdev_lvol_snapshot", 00:08:52.626 "bdev_lvol_create", 00:08:52.626 "bdev_lvol_delete_lvstore", 00:08:52.626 "bdev_lvol_rename_lvstore", 00:08:52.626 "bdev_lvol_create_lvstore", 00:08:52.626 "bdev_raid_set_options", 00:08:52.626 
"bdev_raid_remove_base_bdev", 00:08:52.626 "bdev_raid_add_base_bdev", 00:08:52.626 "bdev_raid_delete", 00:08:52.626 "bdev_raid_create", 00:08:52.626 "bdev_raid_get_bdevs", 00:08:52.626 "bdev_error_inject_error", 00:08:52.626 "bdev_error_delete", 00:08:52.626 "bdev_error_create", 00:08:52.626 "bdev_split_delete", 00:08:52.626 "bdev_split_create", 00:08:52.626 "bdev_delay_delete", 00:08:52.626 "bdev_delay_create", 00:08:52.626 "bdev_delay_update_latency", 00:08:52.626 "bdev_zone_block_delete", 00:08:52.626 "bdev_zone_block_create", 00:08:52.626 "blobfs_create", 00:08:52.626 "blobfs_detect", 00:08:52.626 "blobfs_set_cache_size", 00:08:52.626 "bdev_aio_delete", 00:08:52.626 "bdev_aio_rescan", 00:08:52.626 "bdev_aio_create", 00:08:52.626 "bdev_ftl_set_property", 00:08:52.626 "bdev_ftl_get_properties", 00:08:52.626 "bdev_ftl_get_stats", 00:08:52.626 "bdev_ftl_unmap", 00:08:52.626 "bdev_ftl_unload", 00:08:52.626 "bdev_ftl_delete", 00:08:52.626 "bdev_ftl_load", 00:08:52.626 "bdev_ftl_create", 00:08:52.626 "bdev_virtio_attach_controller", 00:08:52.626 "bdev_virtio_scsi_get_devices", 00:08:52.626 "bdev_virtio_detach_controller", 00:08:52.626 "bdev_virtio_blk_set_hotplug", 00:08:52.626 "bdev_iscsi_delete", 00:08:52.626 "bdev_iscsi_create", 00:08:52.626 "bdev_iscsi_set_options", 00:08:52.626 "accel_error_inject_error", 00:08:52.626 "ioat_scan_accel_module", 00:08:52.626 "dsa_scan_accel_module", 00:08:52.626 "iaa_scan_accel_module", 00:08:52.626 "iscsi_set_options", 00:08:52.626 "iscsi_get_auth_groups", 00:08:52.626 "iscsi_auth_group_remove_secret", 00:08:52.626 "iscsi_auth_group_add_secret", 00:08:52.626 "iscsi_delete_auth_group", 00:08:52.626 "iscsi_create_auth_group", 00:08:52.626 "iscsi_set_discovery_auth", 00:08:52.626 "iscsi_get_options", 00:08:52.626 "iscsi_target_node_request_logout", 00:08:52.626 "iscsi_target_node_set_redirect", 00:08:52.626 "iscsi_target_node_set_auth", 00:08:52.626 "iscsi_target_node_add_lun", 00:08:52.626 "iscsi_get_connections", 00:08:52.626 "iscsi_portal_group_set_auth", 00:08:52.626 "iscsi_start_portal_group", 00:08:52.626 "iscsi_delete_portal_group", 00:08:52.626 "iscsi_create_portal_group", 00:08:52.626 "iscsi_get_portal_groups", 00:08:52.626 "iscsi_delete_target_node", 00:08:52.626 "iscsi_target_node_remove_pg_ig_maps", 00:08:52.626 "iscsi_target_node_add_pg_ig_maps", 00:08:52.626 "iscsi_create_target_node", 00:08:52.626 "iscsi_get_target_nodes", 00:08:52.626 "iscsi_delete_initiator_group", 00:08:52.626 "iscsi_initiator_group_remove_initiators", 00:08:52.626 "iscsi_initiator_group_add_initiators", 00:08:52.626 "iscsi_create_initiator_group", 00:08:52.626 "iscsi_get_initiator_groups", 00:08:52.626 "nvmf_set_crdt", 00:08:52.626 "nvmf_set_config", 00:08:52.626 "nvmf_set_max_subsystems", 00:08:52.626 "nvmf_subsystem_get_listeners", 00:08:52.626 "nvmf_subsystem_get_qpairs", 00:08:52.626 "nvmf_subsystem_get_controllers", 00:08:52.626 "nvmf_get_stats", 00:08:52.626 "nvmf_get_transports", 00:08:52.626 "nvmf_create_transport", 00:08:52.626 "nvmf_get_targets", 00:08:52.626 "nvmf_delete_target", 00:08:52.626 "nvmf_create_target", 00:08:52.626 "nvmf_subsystem_allow_any_host", 00:08:52.626 "nvmf_subsystem_remove_host", 00:08:52.626 "nvmf_subsystem_add_host", 00:08:52.626 "nvmf_subsystem_remove_ns", 00:08:52.626 "nvmf_subsystem_add_ns", 00:08:52.626 "nvmf_subsystem_listener_set_ana_state", 00:08:52.626 "nvmf_discovery_get_referrals", 00:08:52.627 "nvmf_discovery_remove_referral", 00:08:52.627 "nvmf_discovery_add_referral", 00:08:52.627 "nvmf_subsystem_remove_listener", 
00:08:52.627 "nvmf_subsystem_add_listener", 00:08:52.627 "nvmf_delete_subsystem", 00:08:52.627 "nvmf_create_subsystem", 00:08:52.627 "nvmf_get_subsystems", 00:08:52.627 "env_dpdk_get_mem_stats", 00:08:52.627 "nbd_get_disks", 00:08:52.627 "nbd_stop_disk", 00:08:52.627 "nbd_start_disk", 00:08:52.627 "ublk_recover_disk", 00:08:52.627 "ublk_get_disks", 00:08:52.627 "ublk_stop_disk", 00:08:52.627 "ublk_start_disk", 00:08:52.627 "ublk_destroy_target", 00:08:52.627 "ublk_create_target", 00:08:52.627 "virtio_blk_create_transport", 00:08:52.627 "virtio_blk_get_transports", 00:08:52.627 "vhost_controller_set_coalescing", 00:08:52.627 "vhost_get_controllers", 00:08:52.627 "vhost_delete_controller", 00:08:52.627 "vhost_create_blk_controller", 00:08:52.627 "vhost_scsi_controller_remove_target", 00:08:52.627 "vhost_scsi_controller_add_target", 00:08:52.627 "vhost_start_scsi_controller", 00:08:52.627 "vhost_create_scsi_controller", 00:08:52.627 "thread_set_cpumask", 00:08:52.627 "framework_get_scheduler", 00:08:52.627 "framework_set_scheduler", 00:08:52.627 "framework_get_reactors", 00:08:52.627 "thread_get_io_channels", 00:08:52.627 "thread_get_pollers", 00:08:52.627 "thread_get_stats", 00:08:52.627 "framework_monitor_context_switch", 00:08:52.627 "spdk_kill_instance", 00:08:52.627 "log_enable_timestamps", 00:08:52.627 "log_get_flags", 00:08:52.627 "log_clear_flag", 00:08:52.627 "log_set_flag", 00:08:52.627 "log_get_level", 00:08:52.627 "log_set_level", 00:08:52.627 "log_get_print_level", 00:08:52.627 "log_set_print_level", 00:08:52.627 "framework_enable_cpumask_locks", 00:08:52.627 "framework_disable_cpumask_locks", 00:08:52.627 "framework_wait_init", 00:08:52.627 "framework_start_init", 00:08:52.627 "scsi_get_devices", 00:08:52.627 "bdev_get_histogram", 00:08:52.627 "bdev_enable_histogram", 00:08:52.627 "bdev_set_qos_limit", 00:08:52.627 "bdev_set_qd_sampling_period", 00:08:52.627 "bdev_get_bdevs", 00:08:52.627 "bdev_reset_iostat", 00:08:52.627 "bdev_get_iostat", 00:08:52.627 "bdev_examine", 00:08:52.627 "bdev_wait_for_examine", 00:08:52.627 "bdev_set_options", 00:08:52.627 "notify_get_notifications", 00:08:52.627 "notify_get_types", 00:08:52.627 "accel_get_stats", 00:08:52.627 "accel_set_options", 00:08:52.627 "accel_set_driver", 00:08:52.627 "accel_crypto_key_destroy", 00:08:52.627 "accel_crypto_keys_get", 00:08:52.627 "accel_crypto_key_create", 00:08:52.627 "accel_assign_opc", 00:08:52.627 "accel_get_module_info", 00:08:52.627 "accel_get_opc_assignments", 00:08:52.627 "vmd_rescan", 00:08:52.627 "vmd_remove_device", 00:08:52.627 "vmd_enable", 00:08:52.627 "sock_set_default_impl", 00:08:52.627 "sock_impl_set_options", 00:08:52.627 "sock_impl_get_options", 00:08:52.627 "iobuf_get_stats", 00:08:52.627 "iobuf_set_options", 00:08:52.627 "framework_get_pci_devices", 00:08:52.627 "framework_get_config", 00:08:52.627 "framework_get_subsystems", 00:08:52.627 "trace_get_info", 00:08:52.627 "trace_get_tpoint_group_mask", 00:08:52.627 "trace_disable_tpoint_group", 00:08:52.627 "trace_enable_tpoint_group", 00:08:52.627 "trace_clear_tpoint_mask", 00:08:52.627 "trace_set_tpoint_mask", 00:08:52.627 "spdk_get_version", 00:08:52.627 "rpc_get_methods" 00:08:52.627 ] 00:08:52.627 10:04:05 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:52.627 10:04:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:52.627 10:04:05 -- common/autotest_common.sh@10 -- # set +x 00:08:52.627 10:04:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:52.627 10:04:05 -- spdkcli/tcp.sh@38 -- # killprocess 
146990 00:08:52.627 10:04:05 -- common/autotest_common.sh@926 -- # '[' -z 146990 ']' 00:08:52.627 10:04:05 -- common/autotest_common.sh@930 -- # kill -0 146990 00:08:52.627 10:04:05 -- common/autotest_common.sh@931 -- # uname 00:08:52.627 10:04:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:52.627 10:04:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146990 00:08:52.627 10:04:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:52.627 10:04:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:52.627 10:04:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146990' 00:08:52.627 killing process with pid 146990 00:08:52.627 10:04:05 -- common/autotest_common.sh@945 -- # kill 146990 00:08:52.627 10:04:05 -- common/autotest_common.sh@950 -- # wait 146990 00:08:52.886 00:08:52.886 real 0m1.515s 00:08:52.886 user 0m2.819s 00:08:52.886 sys 0m0.417s 00:08:52.886 10:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.886 10:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.886 ************************************ 00:08:52.886 END TEST spdkcli_tcp 00:08:52.886 ************************************ 00:08:52.886 10:04:06 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:52.886 10:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:52.886 10:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.886 10:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.886 ************************************ 00:08:52.886 START TEST dpdk_mem_utility 00:08:52.886 ************************************ 00:08:52.886 10:04:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:53.144 * Looking for test storage... 00:08:53.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:53.144 10:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:53.144 10:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=147292 00:08:53.145 10:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 147292 00:08:53.145 10:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:53.145 10:04:06 -- common/autotest_common.sh@819 -- # '[' -z 147292 ']' 00:08:53.145 10:04:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.145 10:04:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:53.145 10:04:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.145 10:04:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:53.145 10:04:06 -- common/autotest_common.sh@10 -- # set +x 00:08:53.145 [2024-04-24 10:04:06.255479] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
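The dpdk_mem_utility run that follows asks the live target to snapshot DPDK memory over RPC, then post-processes the dump file with dpdk_mem_info.py; the RPC response names the dump path. Direct equivalents of the traced steps (the script itself goes through its rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc env_dpdk_get_mem_stats                 # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    mem=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
    $mem        # heap/mempool/memzone summary, as printed below
    $mem -m 0   # element-level dump of heap id 0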
00:08:53.145 [2024-04-24 10:04:06.255527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147292 ] 00:08:53.145 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.145 [2024-04-24 10:04:06.308991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.145 [2024-04-24 10:04:06.381583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.145 [2024-04-24 10:04:06.381711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.081 10:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:54.081 10:04:07 -- common/autotest_common.sh@852 -- # return 0 00:08:54.081 10:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:54.081 10:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:54.081 10:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.081 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.081 { 00:08:54.081 "filename": "/tmp/spdk_mem_dump.txt" 00:08:54.081 } 00:08:54.081 10:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.081 10:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:54.081 DPDK memory size 814.000000 MiB in 1 heap(s) 00:08:54.081 1 heaps totaling size 814.000000 MiB 00:08:54.081 size: 814.000000 MiB heap id: 0 00:08:54.081 end heaps---------- 00:08:54.081 8 mempools totaling size 598.116089 MiB 00:08:54.081 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:54.081 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:54.081 size: 84.521057 MiB name: bdev_io_147292 00:08:54.081 size: 51.011292 MiB name: evtpool_147292 00:08:54.081 size: 50.003479 MiB name: msgpool_147292 00:08:54.081 size: 21.763794 MiB name: PDU_Pool 00:08:54.081 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:54.081 size: 0.026123 MiB name: Session_Pool 00:08:54.081 end mempools------- 00:08:54.081 6 memzones totaling size 4.142822 MiB 00:08:54.081 size: 1.000366 MiB name: RG_ring_0_147292 00:08:54.081 size: 1.000366 MiB name: RG_ring_1_147292 00:08:54.081 size: 1.000366 MiB name: RG_ring_4_147292 00:08:54.081 size: 1.000366 MiB name: RG_ring_5_147292 00:08:54.081 size: 0.125366 MiB name: RG_ring_2_147292 00:08:54.081 size: 0.015991 MiB name: RG_ring_3_147292 00:08:54.081 end memzones------- 00:08:54.081 10:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:54.081 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:08:54.081 list of free elements. 
size: 12.519348 MiB 00:08:54.081 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:54.081 element at address: 0x200018e00000 with size: 0.999878 MiB 00:08:54.081 element at address: 0x200019000000 with size: 0.999878 MiB 00:08:54.081 element at address: 0x200003e00000 with size: 0.996277 MiB 00:08:54.081 element at address: 0x200031c00000 with size: 0.994446 MiB 00:08:54.081 element at address: 0x200013800000 with size: 0.978699 MiB 00:08:54.081 element at address: 0x200007000000 with size: 0.959839 MiB 00:08:54.081 element at address: 0x200019200000 with size: 0.936584 MiB 00:08:54.081 element at address: 0x200000200000 with size: 0.841614 MiB 00:08:54.081 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:08:54.081 element at address: 0x20000b200000 with size: 0.490723 MiB 00:08:54.081 element at address: 0x200000800000 with size: 0.487793 MiB 00:08:54.081 element at address: 0x200019400000 with size: 0.485657 MiB 00:08:54.081 element at address: 0x200027e00000 with size: 0.410034 MiB 00:08:54.081 element at address: 0x200003a00000 with size: 0.355530 MiB 00:08:54.081 list of standard malloc elements. size: 199.218079 MiB 00:08:54.081 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:08:54.081 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:08:54.081 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:54.081 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:08:54.081 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:54.081 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:54.081 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:08:54.081 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:54.081 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:08:54.081 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003adb300 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003adb500 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:08:54.081 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:08:54.081 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:08:54.081 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200027e69040 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:08:54.081 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:08:54.082 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:08:54.082 list of memzone associated elements. size: 602.262573 MiB 00:08:54.082 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:08:54.082 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:54.082 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:08:54.082 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:54.082 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:08:54.082 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_147292_0 00:08:54.082 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:54.082 associated memzone info: size: 48.002930 MiB name: MP_evtpool_147292_0 00:08:54.082 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:54.082 associated memzone info: size: 48.002930 MiB name: MP_msgpool_147292_0 00:08:54.082 element at address: 0x2000195be940 with size: 20.255554 MiB 00:08:54.082 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:54.082 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:08:54.082 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:54.082 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:54.082 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_147292 00:08:54.082 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:54.082 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_147292 00:08:54.082 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:54.082 associated memzone info: size: 1.007996 MiB name: MP_evtpool_147292 00:08:54.082 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:08:54.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:54.082 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:08:54.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:54.082 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:08:54.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:54.082 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:08:54.082 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:54.082 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:54.082 associated memzone info: size: 1.000366 MiB name: RG_ring_0_147292 00:08:54.082 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:54.082 associated memzone info: size: 1.000366 MiB name: RG_ring_1_147292 00:08:54.082 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:08:54.082 associated memzone info: size: 1.000366 MiB name: RG_ring_4_147292 00:08:54.082 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:08:54.082 associated memzone info: size: 1.000366 MiB name: RG_ring_5_147292 00:08:54.082 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:08:54.082 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_147292 00:08:54.082 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:08:54.082 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:54.082 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:08:54.082 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:54.082 element at address: 0x20001947c540 with size: 0.250488 MiB 00:08:54.082 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:54.082 element at address: 0x200003adf880 with size: 0.125488 MiB 00:08:54.082 associated memzone info: size: 0.125366 MiB name: RG_ring_2_147292 00:08:54.082 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:08:54.082 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:54.082 element at address: 0x200027e69100 with size: 0.023743 MiB 00:08:54.082 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:54.082 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:08:54.082 associated memzone info: size: 0.015991 MiB name: RG_ring_3_147292 00:08:54.082 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:08:54.082 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:54.082 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:08:54.082 associated memzone info: size: 0.000183 MiB name: MP_msgpool_147292 00:08:54.082 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:08:54.082 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_147292 00:08:54.082 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:08:54.082 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:54.082 10:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:54.082 10:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 147292 00:08:54.082 10:04:07 -- common/autotest_common.sh@926 -- # '[' -z 147292 ']' 00:08:54.082 10:04:07 -- common/autotest_common.sh@930 -- # kill -0 147292 00:08:54.082 10:04:07 -- common/autotest_common.sh@931 -- # uname 00:08:54.082 10:04:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:54.082 10:04:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147292 00:08:54.082 10:04:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:54.082 10:04:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:54.082 10:04:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147292' 00:08:54.082 killing process with pid 147292 00:08:54.082 10:04:07 -- common/autotest_common.sh@945 -- # kill 147292 00:08:54.082 10:04:07 -- common/autotest_common.sh@950 -- # wait 147292 00:08:54.341 00:08:54.341 real 0m1.408s 00:08:54.341 user 0m1.499s 00:08:54.341 sys 0m0.377s 00:08:54.341 10:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.341 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.341 ************************************ 00:08:54.341 END TEST dpdk_mem_utility 00:08:54.341 ************************************ 00:08:54.341 10:04:07 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:54.341 10:04:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.341 10:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.341 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.341 
************************************ 00:08:54.341 START TEST event 00:08:54.341 ************************************ 00:08:54.341 10:04:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:54.600 * Looking for test storage... 00:08:54.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:54.600 10:04:07 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:54.600 10:04:07 -- bdev/nbd_common.sh@6 -- # set -e 00:08:54.600 10:04:07 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:54.600 10:04:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:54.600 10:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.600 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.600 ************************************ 00:08:54.600 START TEST event_perf 00:08:54.600 ************************************ 00:08:54.600 10:04:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:54.600 Running I/O for 1 seconds...[2024-04-24 10:04:07.685332] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:54.600 [2024-04-24 10:04:07.685408] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147580 ] 00:08:54.600 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.600 [2024-04-24 10:04:07.744272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.600 [2024-04-24 10:04:07.816525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.600 [2024-04-24 10:04:07.816622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.600 [2024-04-24 10:04:07.816706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.600 [2024-04-24 10:04:07.816708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.977 Running I/O for 1 seconds... 00:08:55.977 lcore 0: 200328 00:08:55.977 lcore 1: 200329 00:08:55.977 lcore 2: 200330 00:08:55.977 lcore 3: 200329 00:08:55.977 done. 
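event_perf spreads the event framework across four lcores (-m 0xF) for a one-second run (-t 1); the per-lcore counters above read as the number of events each reactor processed in that window, roughly 200k apiece and evenly balanced across cores. Invocation as traced:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1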
00:08:55.977 00:08:55.977 real 0m1.242s 00:08:55.977 user 0m4.158s 00:08:55.977 sys 0m0.080s 00:08:55.977 10:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.977 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:55.977 ************************************ 00:08:55.977 END TEST event_perf 00:08:55.977 ************************************ 00:08:55.977 10:04:08 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:55.977 10:04:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:55.977 10:04:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:55.977 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:55.977 ************************************ 00:08:55.977 START TEST event_reactor 00:08:55.977 ************************************ 00:08:55.977 10:04:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:55.977 [2024-04-24 10:04:08.961773] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:55.977 [2024-04-24 10:04:08.961850] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147837 ] 00:08:55.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.977 [2024-04-24 10:04:09.019872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.977 [2024-04-24 10:04:09.088745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.914 test_start 00:08:56.914 oneshot 00:08:56.914 tick 100 00:08:56.914 tick 100 00:08:56.914 tick 250 00:08:56.914 tick 100 00:08:56.914 tick 100 00:08:56.914 tick 250 00:08:56.914 tick 500 00:08:56.914 tick 100 00:08:56.914 tick 100 00:08:56.914 tick 100 00:08:56.914 tick 250 00:08:56.914 tick 100 00:08:56.914 tick 100 00:08:56.914 test_end 00:08:56.914 00:08:56.914 real 0m1.244s 00:08:56.914 user 0m1.165s 00:08:56.914 sys 0m0.075s 00:08:56.914 10:04:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.914 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.914 ************************************ 00:08:56.914 END TEST event_reactor 00:08:56.914 ************************************ 00:08:57.172 10:04:10 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:57.172 10:04:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:57.173 10:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.173 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.173 ************************************ 00:08:57.173 START TEST event_reactor_perf 00:08:57.173 ************************************ 00:08:57.173 10:04:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:57.173 [2024-04-24 10:04:10.243960] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:08:57.173 [2024-04-24 10:04:10.244038] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148087 ] 00:08:57.173 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.173 [2024-04-24 10:04:10.303233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.173 [2024-04-24 10:04:10.371833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.547 test_start 00:08:58.547 test_end 00:08:58.547 Performance: 481884 events per second 00:08:58.547 00:08:58.547 real 0m1.239s 00:08:58.547 user 0m1.166s 00:08:58.548 sys 0m0.069s 00:08:58.548 10:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.548 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 ************************************ 00:08:58.548 END TEST event_reactor_perf 00:08:58.548 ************************************ 00:08:58.548 10:04:11 -- event/event.sh@49 -- # uname -s 00:08:58.548 10:04:11 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:58.548 10:04:11 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:58.548 10:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.548 10:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.548 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 ************************************ 00:08:58.548 START TEST event_scheduler 00:08:58.548 ************************************ 00:08:58.548 10:04:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:58.548 * Looking for test storage... 00:08:58.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:58.548 10:04:11 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:58.548 10:04:11 -- scheduler/scheduler.sh@35 -- # scheduler_pid=148360 00:08:58.548 10:04:11 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:58.548 10:04:11 -- scheduler/scheduler.sh@37 -- # waitforlisten 148360 00:08:58.548 10:04:11 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:58.548 10:04:11 -- common/autotest_common.sh@819 -- # '[' -z 148360 ']' 00:08:58.548 10:04:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.548 10:04:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:58.548 10:04:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.548 10:04:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:58.548 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:08:58.548 [2024-04-24 10:04:11.615932] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
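The scheduler app starting here was launched with -m 0xF -p 0x2 --wait-for-rpc -f: four lcores, main lcore 2 (hence the --main-lcore=2 EAL argument below, and why the process later shows up as reactor_2), and framework initialization deferred until an RPC requests it. The surrounding scheduler.sh steps, condensed from the trace:

    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $scheduler_pid                  # blocks until /var/tmp/spdk.sock is up
    rpc_cmd framework_set_scheduler dynamic       # must be set before init completes
    rpc_cmd framework_start_init                  # now the reactors actually start
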
00:08:58.548 [2024-04-24 10:04:11.615985] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148360 ] 00:08:58.548 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.548 [2024-04-24 10:04:11.666857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.548 [2024-04-24 10:04:11.746282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.548 [2024-04-24 10:04:11.746364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.548 [2024-04-24 10:04:11.746453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.548 [2024-04-24 10:04:11.746455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.549 10:04:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.549 10:04:12 -- common/autotest_common.sh@852 -- # return 0 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 POWER: Env isn't set yet! 00:08:59.549 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:59.549 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:59.549 POWER: Cannot set governor of lcore 0 to userspace 00:08:59.549 POWER: Attempting to initialise PSTAT power management... 00:08:59.549 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:08:59.549 POWER: Initialized successfully for lcore 0 power management 00:08:59.549 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:08:59.549 POWER: Initialized successfully for lcore 1 power management 00:08:59.549 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:08:59.549 POWER: Initialized successfully for lcore 2 power management 00:08:59.549 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:08:59.549 POWER: Initialized successfully for lcore 3 power management 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 [2024-04-24 10:04:12.575943] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
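The POWER lines above are DPDK's power library probing cpufreq: the ACPI-cpufreq attempt fails (the userspace governor cannot be set), so it falls back to the intel_pstate path and pins each test lcore's governor to 'performance', restoring the originals at shutdown. The sysfs knobs involved can be poked directly; this is illustrative only, not part of the test scripts:

    # What the EAL power library effectively does per lcore (illustrative sketch):
    for cpu in 0 1 2 3; do
            gov=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
            cat "$gov"                                     # e.g. 'powersave' before the test
            echo performance | sudo tee "$gov" >/dev/null  # pinned for the benchmark
    done
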
00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:59.549 10:04:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.549 10:04:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 ************************************ 00:08:59.549 START TEST scheduler_create_thread 00:08:59.549 ************************************ 00:08:59.549 10:04:12 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 2 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 3 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 4 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 5 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 6 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 7 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 8 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 9 00:08:59.549 
10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 10 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:59.549 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.549 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.549 10:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.549 10:04:12 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:59.550 10:04:12 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:59.550 10:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.550 10:04:12 -- common/autotest_common.sh@10 -- # set +x 00:09:00.485 10:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.485 10:04:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:00.485 10:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.485 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:09:01.862 10:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.862 10:04:15 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:01.862 10:04:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:01.862 10:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.862 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:02.799 10:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.799 00:09:02.799 real 0m3.380s 00:09:02.799 user 0m0.018s 00:09:02.799 sys 0m0.010s 00:09:02.799 10:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.799 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:09:02.799 ************************************ 00:09:02.799 END TEST scheduler_create_thread 00:09:02.799 ************************************ 00:09:02.799 10:04:15 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:02.799 10:04:15 -- scheduler/scheduler.sh@46 -- # killprocess 148360 00:09:02.799 10:04:15 -- common/autotest_common.sh@926 -- # '[' -z 148360 ']' 00:09:02.799 10:04:15 -- common/autotest_common.sh@930 -- # kill -0 148360 00:09:02.799 10:04:15 -- common/autotest_common.sh@931 -- # uname 00:09:02.799 10:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:02.799 10:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148360 00:09:02.799 10:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:02.799 10:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:02.799 10:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148360' 00:09:02.799 killing process with pid 148360 00:09:02.799 10:04:16 -- common/autotest_common.sh@945 -- # kill 148360 00:09:02.799 10:04:16 -- common/autotest_common.sh@950 -- # wait 148360 00:09:03.366 [2024-04-24 10:04:16.343908] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
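Teardown goes through killprocess, which checks that the pid is still alive, reads its comm name (reactor_2 here, since the main lcore was 2), signals it, and waits for exit. Sketched from the xtrace above (the uname/Linux check is omitted and the sudo branch is inferred from the '[' reactor_2 = sudo ']' test):

    killprocess() {
            local pid=$1
            kill -0 "$pid" || return 1                       # must still be running
            local name; name=$(ps --no-headers -o comm= "$pid")
            echo "killing process with pid $pid"
            if [ "$name" = sudo ]; then
                    sudo kill "$pid"     # inferred: escalate for sudo-wrapped apps
            else
                    kill "$pid"
            fi
            wait "$pid" || true
    }
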
00:09:03.366 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:09:03.366 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:09:03.366 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:09:03.366 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:09:03.366 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:09:03.366 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:09:03.366 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:09:03.366 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:09:03.366 00:09:03.366 real 0m5.094s 00:09:03.366 user 0m10.636s 00:09:03.366 sys 0m0.314s 00:09:03.366 10:04:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.366 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.366 ************************************ 00:09:03.366 END TEST event_scheduler 00:09:03.366 ************************************ 00:09:03.366 10:04:16 -- event/event.sh@51 -- # modprobe -n nbd 00:09:03.366 10:04:16 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:03.366 10:04:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.366 10:04:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.366 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.366 ************************************ 00:09:03.366 START TEST app_repeat 00:09:03.366 ************************************ 00:09:03.366 10:04:16 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:03.367 10:04:16 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.367 10:04:16 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.367 10:04:16 -- event/event.sh@13 -- # local nbd_list 00:09:03.367 10:04:16 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.367 10:04:16 -- event/event.sh@14 -- # local bdev_list 00:09:03.367 10:04:16 -- event/event.sh@15 -- # local repeat_times=4 00:09:03.367 10:04:16 -- event/event.sh@17 -- # modprobe nbd 00:09:03.625 10:04:16 -- event/event.sh@19 -- # repeat_pid=149342 00:09:03.625 10:04:16 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:03.625 10:04:16 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:03.625 10:04:16 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 149342' 00:09:03.625 Process app_repeat pid: 149342 00:09:03.625 10:04:16 -- event/event.sh@23 -- # for i in {0..2} 00:09:03.625 10:04:16 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:03.625 spdk_app_start Round 0 00:09:03.625 10:04:16 -- event/event.sh@25 -- # waitforlisten 149342 /var/tmp/spdk-nbd.sock 00:09:03.625 10:04:16 -- common/autotest_common.sh@819 -- # '[' -z 149342 ']' 00:09:03.625 10:04:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:03.625 10:04:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.625 10:04:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
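app_repeat (-r gives the RPC socket, -m 0x3 selects two cores, -t 4 matches the script's repeat_times=4) is driven by a three-iteration loop in event.sh: each round waits for the app socket, creates two 64 MB malloc bdevs with a 4096-byte block size, verifies them over nbd, then kills the app with SIGTERM and lets it restart. Condensed from the trace that follows:

    for i in {0..2}; do
            echo "spdk_app_start Round $i"
            waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
            rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
            rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
            nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
            rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
            sleep 3
    done
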
00:09:03.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:03.625 10:04:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.625 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 [2024-04-24 10:04:16.672152] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:03.625 [2024-04-24 10:04:16.672212] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149342 ] 00:09:03.625 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.625 [2024-04-24 10:04:16.727229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:03.625 [2024-04-24 10:04:16.803884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.625 [2024-04-24 10:04:16.803886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.562 10:04:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:04.562 10:04:17 -- common/autotest_common.sh@852 -- # return 0 00:09:04.562 10:04:17 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:04.562 Malloc0 00:09:04.562 10:04:17 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:04.820 Malloc1 00:09:04.820 10:04:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@12 -- # local i 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.820 10:04:17 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:04.820 /dev/nbd0 00:09:04.820 10:04:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:04.820 10:04:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:04.820 10:04:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:04.820 10:04:18 -- common/autotest_common.sh@857 -- # local i 00:09:04.820 10:04:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:04.820 10:04:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:04.820 10:04:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:04.820 10:04:18 -- 
common/autotest_common.sh@861 -- # break 00:09:04.821 10:04:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:04.821 10:04:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:04.821 10:04:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.821 1+0 records in 00:09:04.821 1+0 records out 00:09:04.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192996 s, 21.2 MB/s 00:09:04.821 10:04:18 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:04.821 10:04:18 -- common/autotest_common.sh@874 -- # size=4096 00:09:04.821 10:04:18 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:04.821 10:04:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:04.821 10:04:18 -- common/autotest_common.sh@877 -- # return 0 00:09:04.821 10:04:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.821 10:04:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.821 10:04:18 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:05.080 /dev/nbd1 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:05.080 10:04:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:05.080 10:04:18 -- common/autotest_common.sh@857 -- # local i 00:09:05.080 10:04:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:05.080 10:04:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:05.080 10:04:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:05.080 10:04:18 -- common/autotest_common.sh@861 -- # break 00:09:05.080 10:04:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:05.080 10:04:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:05.080 10:04:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.080 1+0 records in 00:09:05.080 1+0 records out 00:09:05.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128634 s, 31.8 MB/s 00:09:05.080 10:04:18 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:05.080 10:04:18 -- common/autotest_common.sh@874 -- # size=4096 00:09:05.080 10:04:18 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:05.080 10:04:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:05.080 10:04:18 -- common/autotest_common.sh@877 -- # return 0 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.080 10:04:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:05.339 { 00:09:05.339 "nbd_device": "/dev/nbd0", 00:09:05.339 "bdev_name": "Malloc0" 00:09:05.339 }, 00:09:05.339 { 00:09:05.339 "nbd_device": "/dev/nbd1", 
00:09:05.339 "bdev_name": "Malloc1" 00:09:05.339 } 00:09:05.339 ]' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:05.339 { 00:09:05.339 "nbd_device": "/dev/nbd0", 00:09:05.339 "bdev_name": "Malloc0" 00:09:05.339 }, 00:09:05.339 { 00:09:05.339 "nbd_device": "/dev/nbd1", 00:09:05.339 "bdev_name": "Malloc1" 00:09:05.339 } 00:09:05.339 ]' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:05.339 /dev/nbd1' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:05.339 /dev/nbd1' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@65 -- # count=2 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@95 -- # count=2 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:05.339 10:04:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:05.340 256+0 records in 00:09:05.340 256+0 records out 00:09:05.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103192 s, 102 MB/s 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:05.340 256+0 records in 00:09:05.340 256+0 records out 00:09:05.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140091 s, 74.8 MB/s 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:05.340 256+0 records in 00:09:05.340 256+0 records out 00:09:05.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147481 s, 71.1 MB/s 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@51 -- # local i 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.340 10:04:18 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:05.598 10:04:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:05.598 10:04:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:05.598 10:04:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@41 -- # break 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.599 10:04:18 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@41 -- # break 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.858 10:04:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.858 10:04:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.858 10:04:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.858 10:04:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@65 -- # true 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@65 -- # count=0 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@104 -- # count=0 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:06.117 10:04:19 -- bdev/nbd_common.sh@109 -- # return 0 00:09:06.117 10:04:19 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:06.117 10:04:19 -- event/event.sh@35 -- # 
sleep 3 00:09:06.376 [2024-04-24 10:04:19.573601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:06.376 [2024-04-24 10:04:19.637950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.376 [2024-04-24 10:04:19.637953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.635 [2024-04-24 10:04:19.679382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:06.635 [2024-04-24 10:04:19.679421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:09.171 10:04:22 -- event/event.sh@23 -- # for i in {0..2} 00:09:09.171 10:04:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:09.171 spdk_app_start Round 1 00:09:09.171 10:04:22 -- event/event.sh@25 -- # waitforlisten 149342 /var/tmp/spdk-nbd.sock 00:09:09.171 10:04:22 -- common/autotest_common.sh@819 -- # '[' -z 149342 ']' 00:09:09.171 10:04:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:09.171 10:04:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:09.171 10:04:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:09.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:09.171 10:04:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:09.171 10:04:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.430 10:04:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.430 10:04:22 -- common/autotest_common.sh@852 -- # return 0 00:09:09.430 10:04:22 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:09.689 Malloc0 00:09:09.689 10:04:22 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:09.689 Malloc1 00:09:09.689 10:04:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@12 -- # local i 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.689 10:04:22 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:09.949 /dev/nbd0 00:09:09.949 10:04:23 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:09.949 10:04:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:09.949 10:04:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:09.949 10:04:23 -- common/autotest_common.sh@857 -- # local i 00:09:09.949 10:04:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:09.949 10:04:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:09.949 10:04:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:09.949 10:04:23 -- common/autotest_common.sh@861 -- # break 00:09:09.949 10:04:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:09.949 10:04:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:09.949 10:04:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:09.949 1+0 records in 00:09:09.949 1+0 records out 00:09:09.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203534 s, 20.1 MB/s 00:09:09.949 10:04:23 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:09.949 10:04:23 -- common/autotest_common.sh@874 -- # size=4096 00:09:09.949 10:04:23 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:09.949 10:04:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:09.949 10:04:23 -- common/autotest_common.sh@877 -- # return 0 00:09:09.949 10:04:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.949 10:04:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.949 10:04:23 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:10.208 /dev/nbd1 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:10.208 10:04:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:10.208 10:04:23 -- common/autotest_common.sh@857 -- # local i 00:09:10.208 10:04:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:10.208 10:04:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:10.208 10:04:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:10.208 10:04:23 -- common/autotest_common.sh@861 -- # break 00:09:10.208 10:04:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:10.208 10:04:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:10.208 10:04:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:10.208 1+0 records in 00:09:10.208 1+0 records out 00:09:10.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225167 s, 18.2 MB/s 00:09:10.208 10:04:23 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.208 10:04:23 -- common/autotest_common.sh@874 -- # size=4096 00:09:10.208 10:04:23 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.208 10:04:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:10.208 10:04:23 -- common/autotest_common.sh@877 -- # return 0 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:10.208 { 00:09:10.208 "nbd_device": "/dev/nbd0", 00:09:10.208 "bdev_name": "Malloc0" 00:09:10.208 }, 00:09:10.208 { 00:09:10.208 "nbd_device": "/dev/nbd1", 00:09:10.208 "bdev_name": "Malloc1" 00:09:10.208 } 00:09:10.208 ]' 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:10.208 { 00:09:10.208 "nbd_device": "/dev/nbd0", 00:09:10.208 "bdev_name": "Malloc0" 00:09:10.208 }, 00:09:10.208 { 00:09:10.208 "nbd_device": "/dev/nbd1", 00:09:10.208 "bdev_name": "Malloc1" 00:09:10.208 } 00:09:10.208 ]' 00:09:10.208 10:04:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:10.467 /dev/nbd1' 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:10.467 /dev/nbd1' 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@65 -- # count=2 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@95 -- # count=2 00:09:10.467 10:04:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:10.468 256+0 records in 00:09:10.468 256+0 records out 00:09:10.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102813 s, 102 MB/s 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:10.468 256+0 records in 00:09:10.468 256+0 records out 00:09:10.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137855 s, 76.1 MB/s 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:10.468 256+0 records in 00:09:10.468 256+0 records out 00:09:10.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142597 s, 73.5 MB/s 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@51 -- # local i 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.468 10:04:23 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@41 -- # break 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@41 -- # break 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.727 10:04:23 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@65 -- # true 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@65 -- # count=0 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@104 -- # count=0 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:10.987 10:04:24 -- bdev/nbd_common.sh@109 -- # return 0 00:09:10.987 10:04:24 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:11.246 10:04:24 -- event/event.sh@35 -- # sleep 3 00:09:11.506 [2024-04-24 10:04:24.573410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.506 [2024-04-24 10:04:24.638129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.506 [2024-04-24 10:04:24.638133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.506 [2024-04-24 10:04:24.679525] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:11.506 [2024-04-24 10:04:24.679566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:14.795 10:04:27 -- event/event.sh@23 -- # for i in {0..2} 00:09:14.795 10:04:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:14.795 spdk_app_start Round 2 00:09:14.795 10:04:27 -- event/event.sh@25 -- # waitforlisten 149342 /var/tmp/spdk-nbd.sock 00:09:14.795 10:04:27 -- common/autotest_common.sh@819 -- # '[' -z 149342 ']' 00:09:14.795 10:04:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.795 10:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.795 10:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:14.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
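Each round's data check is the same write-then-verify pass visible in the nbd_common.sh trace above: fill a 1 MiB temp file from /dev/urandom, dd it onto every nbd device with O_DIRECT, byte-compare each device against the file, then clean up. In outline (reconstructed from the trace; argument handling simplified):

    nbd_dd_data_verify() {
            local nbd_list=($1) operation=$2      # e.g. '/dev/nbd0 /dev/nbd1' write|verify
            local tmp_file=$SPDK_DIR/test/event/nbdrandtest
            if [ "$operation" = write ]; then
                    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256    # 1 MiB random data
                    for i in "${nbd_list[@]}"; do
                            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
                    done
            elif [ "$operation" = verify ]; then
                    for i in "${nbd_list[@]}"; do
                            cmp -b -n 1M "$tmp_file" "$i"    # must match byte-for-byte
                    done
                    rm "$tmp_file"
            fi
    }
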
00:09:14.795 10:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.795 10:04:27 -- common/autotest_common.sh@10 -- # set +x 00:09:14.795 10:04:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.795 10:04:27 -- common/autotest_common.sh@852 -- # return 0 00:09:14.795 10:04:27 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.795 Malloc0 00:09:14.795 10:04:27 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.795 Malloc1 00:09:14.795 10:04:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@12 -- # local i 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.795 10:04:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:14.795 /dev/nbd0 00:09:14.795 10:04:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:14.795 10:04:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:14.795 10:04:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:14.795 10:04:28 -- common/autotest_common.sh@857 -- # local i 00:09:14.795 10:04:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:14.795 10:04:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:14.795 10:04:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:15.054 10:04:28 -- common/autotest_common.sh@861 -- # break 00:09:15.054 10:04:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:15.054 10:04:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:15.054 10:04:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.054 1+0 records in 00:09:15.054 1+0 records out 00:09:15.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000112156 s, 36.5 MB/s 00:09:15.054 10:04:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.054 10:04:28 -- common/autotest_common.sh@874 -- # size=4096 00:09:15.054 10:04:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.054 10:04:28 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:09:15.054 10:04:28 -- common/autotest_common.sh@877 -- # return 0 00:09:15.054 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.054 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.054 10:04:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:15.055 /dev/nbd1 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:15.055 10:04:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:15.055 10:04:28 -- common/autotest_common.sh@857 -- # local i 00:09:15.055 10:04:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:15.055 10:04:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:15.055 10:04:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:15.055 10:04:28 -- common/autotest_common.sh@861 -- # break 00:09:15.055 10:04:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:15.055 10:04:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:15.055 10:04:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.055 1+0 records in 00:09:15.055 1+0 records out 00:09:15.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214252 s, 19.1 MB/s 00:09:15.055 10:04:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.055 10:04:28 -- common/autotest_common.sh@874 -- # size=4096 00:09:15.055 10:04:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.055 10:04:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:15.055 10:04:28 -- common/autotest_common.sh@877 -- # return 0 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.055 10:04:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.314 10:04:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.314 { 00:09:15.314 "nbd_device": "/dev/nbd0", 00:09:15.314 "bdev_name": "Malloc0" 00:09:15.314 }, 00:09:15.314 { 00:09:15.314 "nbd_device": "/dev/nbd1", 00:09:15.314 "bdev_name": "Malloc1" 00:09:15.314 } 00:09:15.314 ]' 00:09:15.314 10:04:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.314 { 00:09:15.314 "nbd_device": "/dev/nbd0", 00:09:15.314 "bdev_name": "Malloc0" 00:09:15.314 }, 00:09:15.314 { 00:09:15.314 "nbd_device": "/dev/nbd1", 00:09:15.314 "bdev_name": "Malloc1" 00:09:15.314 } 00:09:15.314 ]' 00:09:15.314 10:04:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.314 10:04:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.314 /dev/nbd1' 00:09:15.314 10:04:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.314 /dev/nbd1' 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@65 -- # count=2 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@95 -- # count=2 00:09:15.315 10:04:28 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:15.315 256+0 records in 00:09:15.315 256+0 records out 00:09:15.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103395 s, 101 MB/s 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.315 256+0 records in 00:09:15.315 256+0 records out 00:09:15.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135863 s, 77.2 MB/s 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:15.315 256+0 records in 00:09:15.315 256+0 records out 00:09:15.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144468 s, 72.6 MB/s 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@51 -- # local i 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.315 10:04:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.575 10:04:28 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@41 -- # break 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.575 10:04:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@41 -- # break 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.834 10:04:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@65 -- # true 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.094 10:04:29 -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.094 10:04:29 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:16.094 10:04:29 -- event/event.sh@35 -- # sleep 3 00:09:16.368 [2024-04-24 10:04:29.582814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.368 [2024-04-24 10:04:29.648533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.368 [2024-04-24 10:04:29.648536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.628 [2024-04-24 10:04:29.689803] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:16.628 [2024-04-24 10:04:29.689844] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
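The nbd teardown traced above leans on two polling helpers: waitfornbd, which loops until the device shows up in /proc/partitions and is readable via a direct-I/O dd, and waitfornbd_exit, which loops until the kernel drops the entry again after nbd_stop_disk. A minimal sketch of the exit-side loop, reconstructed from the xtrace (the 20-iteration bound and the grep are visible above; the sleep between retries is an assumption):

    # sketch reconstructed from the xtrace; the sleep interval is an
    # assumption -- only the loop bounds and the grep appear in the trace
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # done once the kernel removes the device from /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1   # assumed back-off between retries
        done
        return 1
    }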
00:09:19.165 10:04:32 -- event/event.sh@38 -- # waitforlisten 149342 /var/tmp/spdk-nbd.sock 00:09:19.165 10:04:32 -- common/autotest_common.sh@819 -- # '[' -z 149342 ']' 00:09:19.165 10:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.165 10:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:19.165 10:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:19.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:19.165 10:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:19.165 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.424 10:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:19.424 10:04:32 -- common/autotest_common.sh@852 -- # return 0 00:09:19.424 10:04:32 -- event/event.sh@39 -- # killprocess 149342 00:09:19.424 10:04:32 -- common/autotest_common.sh@926 -- # '[' -z 149342 ']' 00:09:19.424 10:04:32 -- common/autotest_common.sh@930 -- # kill -0 149342 00:09:19.424 10:04:32 -- common/autotest_common.sh@931 -- # uname 00:09:19.424 10:04:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:19.424 10:04:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149342 00:09:19.424 10:04:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:19.424 10:04:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:19.424 10:04:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149342' 00:09:19.424 killing process with pid 149342 00:09:19.424 10:04:32 -- common/autotest_common.sh@945 -- # kill 149342 00:09:19.424 10:04:32 -- common/autotest_common.sh@950 -- # wait 149342 00:09:19.697 spdk_app_start is called in Round 0. 00:09:19.697 Shutdown signal received, stop current app iteration 00:09:19.697 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:09:19.697 spdk_app_start is called in Round 1. 00:09:19.697 Shutdown signal received, stop current app iteration 00:09:19.697 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:09:19.697 spdk_app_start is called in Round 2. 00:09:19.697 Shutdown signal received, stop current app iteration 00:09:19.697 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:09:19.697 spdk_app_start is called in Round 3. 
00:09:19.697 Shutdown signal received, stop current app iteration 00:09:19.697 10:04:32 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:19.697 10:04:32 -- event/event.sh@42 -- # return 0 00:09:19.697 00:09:19.697 real 0m16.133s 00:09:19.697 user 0m34.803s 00:09:19.697 sys 0m2.298s 00:09:19.697 10:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.697 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.697 ************************************ 00:09:19.697 END TEST app_repeat 00:09:19.697 ************************************ 00:09:19.697 10:04:32 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:19.697 10:04:32 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:19.697 10:04:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:19.697 10:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.697 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.697 ************************************ 00:09:19.697 START TEST cpu_locks 00:09:19.697 ************************************ 00:09:19.697 10:04:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:19.697 * Looking for test storage... 00:09:19.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:19.697 10:04:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:19.697 10:04:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:19.697 10:04:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:19.697 10:04:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:19.697 10:04:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:19.697 10:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.697 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.697 ************************************ 00:09:19.697 START TEST default_locks 00:09:19.697 ************************************ 00:09:19.697 10:04:32 -- common/autotest_common.sh@1104 -- # default_locks 00:09:19.697 10:04:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=152331 00:09:19.697 10:04:32 -- event/cpu_locks.sh@47 -- # waitforlisten 152331 00:09:19.697 10:04:32 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:19.697 10:04:32 -- common/autotest_common.sh@819 -- # '[' -z 152331 ']' 00:09:19.697 10:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.697 10:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:19.697 10:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.697 10:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:19.697 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.697 [2024-04-24 10:04:32.941570] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:19.697 [2024-04-24 10:04:32.941628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152331 ] 00:09:19.697 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.970 [2024-04-24 10:04:32.994798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.970 [2024-04-24 10:04:33.072495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:19.970 [2024-04-24 10:04:33.072611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.538 10:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:20.538 10:04:33 -- common/autotest_common.sh@852 -- # return 0 00:09:20.538 10:04:33 -- event/cpu_locks.sh@49 -- # locks_exist 152331 00:09:20.538 10:04:33 -- event/cpu_locks.sh@22 -- # lslocks -p 152331 00:09:20.538 10:04:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:20.798 lslocks: write error 00:09:20.798 10:04:33 -- event/cpu_locks.sh@50 -- # killprocess 152331 00:09:20.798 10:04:33 -- common/autotest_common.sh@926 -- # '[' -z 152331 ']' 00:09:20.798 10:04:33 -- common/autotest_common.sh@930 -- # kill -0 152331 00:09:20.798 10:04:33 -- common/autotest_common.sh@931 -- # uname 00:09:20.798 10:04:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:20.798 10:04:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152331 00:09:20.798 10:04:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:20.798 10:04:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:20.798 10:04:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152331' 00:09:20.798 killing process with pid 152331 00:09:20.798 10:04:33 -- common/autotest_common.sh@945 -- # kill 152331 00:09:20.798 10:04:33 -- common/autotest_common.sh@950 -- # wait 152331 00:09:21.057 10:04:34 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 152331 00:09:21.057 10:04:34 -- common/autotest_common.sh@640 -- # local es=0 00:09:21.057 10:04:34 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 152331 00:09:21.057 10:04:34 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:21.057 10:04:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.057 10:04:34 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:21.057 10:04:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.057 10:04:34 -- common/autotest_common.sh@643 -- # waitforlisten 152331 00:09:21.057 10:04:34 -- common/autotest_common.sh@819 -- # '[' -z 152331 ']' 00:09:21.057 10:04:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.057 10:04:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.057 10:04:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
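The core-lock check itself is tiny: locks_exist, traced just above, asks lslocks whether the given pid still holds a file whose name contains spdk_cpu_lock. A close paraphrase of the two traced commands (the "lslocks: write error" in the output is almost certainly lslocks hitting a closed pipe once grep -q exits on its first match, and is harmless):

    # paraphrase of the traced pipeline; grep -q closing the pipe early
    # is what triggers the harmless "lslocks: write error" message
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }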
00:09:21.057 10:04:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.057 10:04:34 -- common/autotest_common.sh@10 -- # set +x 00:09:21.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (152331) - No such process 00:09:21.057 ERROR: process (pid: 152331) is no longer running 00:09:21.057 10:04:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:21.057 10:04:34 -- common/autotest_common.sh@852 -- # return 1 00:09:21.057 10:04:34 -- common/autotest_common.sh@643 -- # es=1 00:09:21.057 10:04:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:21.057 10:04:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:21.057 10:04:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:21.057 10:04:34 -- event/cpu_locks.sh@54 -- # no_locks 00:09:21.057 10:04:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:21.057 10:04:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:21.057 10:04:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:21.057 00:09:21.057 real 0m1.392s 00:09:21.057 user 0m1.461s 00:09:21.057 sys 0m0.411s 00:09:21.057 10:04:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.057 10:04:34 -- common/autotest_common.sh@10 -- # set +x 00:09:21.057 ************************************ 00:09:21.057 END TEST default_locks 00:09:21.057 ************************************ 00:09:21.057 10:04:34 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:21.057 10:04:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:21.057 10:04:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.057 10:04:34 -- common/autotest_common.sh@10 -- # set +x 00:09:21.057 ************************************ 00:09:21.057 START TEST default_locks_via_rpc 00:09:21.057 ************************************ 00:09:21.057 10:04:34 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:21.057 10:04:34 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=152634 00:09:21.057 10:04:34 -- event/cpu_locks.sh@63 -- # waitforlisten 152634 00:09:21.057 10:04:34 -- common/autotest_common.sh@819 -- # '[' -z 152634 ']' 00:09:21.057 10:04:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.057 10:04:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.057 10:04:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.057 10:04:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.057 10:04:34 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:21.057 10:04:34 -- common/autotest_common.sh@10 -- # set +x 00:09:21.317 [2024-04-24 10:04:34.369322] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:21.317 [2024-04-24 10:04:34.369374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152634 ] 00:09:21.317 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.317 [2024-04-24 10:04:34.423703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.317 [2024-04-24 10:04:34.501149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:21.317 [2024-04-24 10:04:34.501262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.884 10:04:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:21.884 10:04:35 -- common/autotest_common.sh@852 -- # return 0 00:09:21.884 10:04:35 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:21.884 10:04:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.884 10:04:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.884 10:04:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.884 10:04:35 -- event/cpu_locks.sh@67 -- # no_locks 00:09:21.884 10:04:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:21.884 10:04:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:21.884 10:04:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:21.884 10:04:35 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:21.884 10:04:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.884 10:04:35 -- common/autotest_common.sh@10 -- # set +x 00:09:21.884 10:04:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.884 10:04:35 -- event/cpu_locks.sh@71 -- # locks_exist 152634 00:09:21.884 10:04:35 -- event/cpu_locks.sh@22 -- # lslocks -p 152634 00:09:21.884 10:04:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:22.143 10:04:35 -- event/cpu_locks.sh@73 -- # killprocess 152634 00:09:22.144 10:04:35 -- common/autotest_common.sh@926 -- # '[' -z 152634 ']' 00:09:22.144 10:04:35 -- common/autotest_common.sh@930 -- # kill -0 152634 00:09:22.144 10:04:35 -- common/autotest_common.sh@931 -- # uname 00:09:22.144 10:04:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:22.144 10:04:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152634 00:09:22.144 10:04:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:22.144 10:04:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:22.144 10:04:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152634' 00:09:22.144 killing process with pid 152634 00:09:22.144 10:04:35 -- common/autotest_common.sh@945 -- # kill 152634 00:09:22.144 10:04:35 -- common/autotest_common.sh@950 -- # wait 152634 00:09:22.712 00:09:22.712 real 0m1.365s 00:09:22.712 user 0m1.427s 00:09:22.712 sys 0m0.391s 00:09:22.712 10:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.712 10:04:35 -- common/autotest_common.sh@10 -- # set +x 00:09:22.712 ************************************ 00:09:22.712 END TEST default_locks_via_rpc 00:09:22.712 ************************************ 00:09:22.712 10:04:35 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:22.712 10:04:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:22.712 10:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.712 10:04:35 -- common/autotest_common.sh@10 
-- # set +x 00:09:22.712 ************************************ 00:09:22.712 START TEST non_locking_app_on_locked_coremask 00:09:22.712 ************************************ 00:09:22.712 10:04:35 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:22.712 10:04:35 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=152898 00:09:22.712 10:04:35 -- event/cpu_locks.sh@81 -- # waitforlisten 152898 /var/tmp/spdk.sock 00:09:22.712 10:04:35 -- common/autotest_common.sh@819 -- # '[' -z 152898 ']' 00:09:22.712 10:04:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.712 10:04:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.712 10:04:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.712 10:04:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.712 10:04:35 -- common/autotest_common.sh@10 -- # set +x 00:09:22.712 10:04:35 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.712 [2024-04-24 10:04:35.764622] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:22.712 [2024-04-24 10:04:35.764670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152898 ] 00:09:22.712 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.712 [2024-04-24 10:04:35.817924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.712 [2024-04-24 10:04:35.894937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.712 [2024-04-24 10:04:35.895052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.280 10:04:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.280 10:04:36 -- common/autotest_common.sh@852 -- # return 0 00:09:23.280 10:04:36 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=152914 00:09:23.280 10:04:36 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:23.280 10:04:36 -- event/cpu_locks.sh@85 -- # waitforlisten 152914 /var/tmp/spdk2.sock 00:09:23.280 10:04:36 -- common/autotest_common.sh@819 -- # '[' -z 152914 ']' 00:09:23.281 10:04:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:23.281 10:04:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:23.281 10:04:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:23.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:23.281 10:04:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:23.281 10:04:36 -- common/autotest_common.sh@10 -- # set +x 00:09:23.540 [2024-04-24 10:04:36.569231] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:23.540 [2024-04-24 10:04:36.569276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152914 ] 00:09:23.540 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.540 [2024-04-24 10:04:36.642133] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:23.540 [2024-04-24 10:04:36.642154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.540 [2024-04-24 10:04:36.780202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:23.540 [2024-04-24 10:04:36.780313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.108 10:04:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:24.108 10:04:37 -- common/autotest_common.sh@852 -- # return 0 00:09:24.108 10:04:37 -- event/cpu_locks.sh@87 -- # locks_exist 152898 00:09:24.108 10:04:37 -- event/cpu_locks.sh@22 -- # lslocks -p 152898 00:09:24.108 10:04:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.677 lslocks: write error 00:09:24.677 10:04:37 -- event/cpu_locks.sh@89 -- # killprocess 152898 00:09:24.677 10:04:37 -- common/autotest_common.sh@926 -- # '[' -z 152898 ']' 00:09:24.677 10:04:37 -- common/autotest_common.sh@930 -- # kill -0 152898 00:09:24.677 10:04:37 -- common/autotest_common.sh@931 -- # uname 00:09:24.677 10:04:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:24.677 10:04:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152898 00:09:24.677 10:04:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:24.677 10:04:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:24.677 10:04:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152898' 00:09:24.677 killing process with pid 152898 00:09:24.677 10:04:37 -- common/autotest_common.sh@945 -- # kill 152898 00:09:24.677 10:04:37 -- common/autotest_common.sh@950 -- # wait 152898 00:09:25.246 10:04:38 -- event/cpu_locks.sh@90 -- # killprocess 152914 00:09:25.246 10:04:38 -- common/autotest_common.sh@926 -- # '[' -z 152914 ']' 00:09:25.246 10:04:38 -- common/autotest_common.sh@930 -- # kill -0 152914 00:09:25.246 10:04:38 -- common/autotest_common.sh@931 -- # uname 00:09:25.246 10:04:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:25.246 10:04:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152914 00:09:25.505 10:04:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:25.505 10:04:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:25.505 10:04:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152914' 00:09:25.505 killing process with pid 152914 00:09:25.505 10:04:38 -- common/autotest_common.sh@945 -- # kill 152914 00:09:25.505 10:04:38 -- common/autotest_common.sh@950 -- # wait 152914 00:09:25.763 00:09:25.763 real 0m3.176s 00:09:25.763 user 0m3.378s 00:09:25.763 sys 0m0.856s 00:09:25.763 10:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.763 10:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:25.763 ************************************ 00:09:25.763 END TEST non_locking_app_on_locked_coremask 00:09:25.763 ************************************ 00:09:25.763 10:04:38 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
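The test that just wrapped up, and the locking_app_on_unlocked_coremask run launched next, both use the same two-target pattern: one spdk_tgt per RPC socket, sharing core mask 0x1, with --disable-cpumask-locks deciding which side skips the lock. Condensed from the traced commands ($SPDK_TGT stands in for the build/bin/spdk_tgt path shown in the trace, and the waitforlisten calls are elided):

    # condensed sketch of the traced setup; $SPDK_TGT and the elided
    # waitforlisten calls are shorthand, not part of the trace
    $SPDK_TGT -m 0x1 &                                        # claims the core 0 lock
    pid1=$!
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                   # same core, lock skipped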
00:09:25.763 10:04:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:25.763 10:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.763 10:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:25.763 ************************************ 00:09:25.763 START TEST locking_app_on_unlocked_coremask 00:09:25.763 ************************************ 00:09:25.763 10:04:38 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:25.763 10:04:38 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=153411 00:09:25.763 10:04:38 -- event/cpu_locks.sh@99 -- # waitforlisten 153411 /var/tmp/spdk.sock 00:09:25.763 10:04:38 -- common/autotest_common.sh@819 -- # '[' -z 153411 ']' 00:09:25.763 10:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.763 10:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:25.763 10:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.763 10:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:25.763 10:04:38 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:25.763 10:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:25.763 [2024-04-24 10:04:38.976203] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:25.763 [2024-04-24 10:04:38.976252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153411 ] 00:09:25.763 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.763 [2024-04-24 10:04:39.029577] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:25.763 [2024-04-24 10:04:39.029602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.021 [2024-04-24 10:04:39.108410] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.021 [2024-04-24 10:04:39.108522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.589 10:04:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:26.589 10:04:39 -- common/autotest_common.sh@852 -- # return 0 00:09:26.589 10:04:39 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=153600 00:09:26.589 10:04:39 -- event/cpu_locks.sh@103 -- # waitforlisten 153600 /var/tmp/spdk2.sock 00:09:26.589 10:04:39 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:26.589 10:04:39 -- common/autotest_common.sh@819 -- # '[' -z 153600 ']' 00:09:26.589 10:04:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:26.589 10:04:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.589 10:04:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:26.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
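Both targets are gated on waitforlisten, of which the trace only exposes the locals (rpc_addr, max_retries=100) and the "Waiting for process..." echo. A rough reconstruction, under the assumption that it checks the pid is alive and probes for the UNIX socket until the retry budget runs out (the actual probe and interval are not visible here):

    # rough reconstruction; only the locals and the echo appear in the
    # trace -- the socket probe and the sleep are assumptions
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died -> fail fast
            [[ -S $rpc_addr ]] && return 0            # assumed liveness check
            sleep 0.5                                 # assumed retry interval
        done
        return 1
    }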
00:09:26.589 10:04:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.589 10:04:39 -- common/autotest_common.sh@10 -- # set +x 00:09:26.589 [2024-04-24 10:04:39.781651] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:26.589 [2024-04-24 10:04:39.781701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153600 ] 00:09:26.589 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.589 [2024-04-24 10:04:39.858898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.849 [2024-04-24 10:04:40.012124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.849 [2024-04-24 10:04:40.012242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.495 10:04:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.495 10:04:40 -- common/autotest_common.sh@852 -- # return 0 00:09:27.495 10:04:40 -- event/cpu_locks.sh@105 -- # locks_exist 153600 00:09:27.495 10:04:40 -- event/cpu_locks.sh@22 -- # lslocks -p 153600 00:09:27.495 10:04:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.755 lslocks: write error 00:09:27.755 10:04:40 -- event/cpu_locks.sh@107 -- # killprocess 153411 00:09:27.755 10:04:40 -- common/autotest_common.sh@926 -- # '[' -z 153411 ']' 00:09:27.755 10:04:40 -- common/autotest_common.sh@930 -- # kill -0 153411 00:09:27.755 10:04:40 -- common/autotest_common.sh@931 -- # uname 00:09:27.755 10:04:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:27.755 10:04:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153411 00:09:27.755 10:04:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:27.755 10:04:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:27.755 10:04:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153411' 00:09:27.755 killing process with pid 153411 00:09:27.755 10:04:40 -- common/autotest_common.sh@945 -- # kill 153411 00:09:27.755 10:04:40 -- common/autotest_common.sh@950 -- # wait 153411 00:09:28.325 10:04:41 -- event/cpu_locks.sh@108 -- # killprocess 153600 00:09:28.325 10:04:41 -- common/autotest_common.sh@926 -- # '[' -z 153600 ']' 00:09:28.325 10:04:41 -- common/autotest_common.sh@930 -- # kill -0 153600 00:09:28.325 10:04:41 -- common/autotest_common.sh@931 -- # uname 00:09:28.325 10:04:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:28.325 10:04:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153600 00:09:28.325 10:04:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:28.325 10:04:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:28.325 10:04:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153600' 00:09:28.325 killing process with pid 153600 00:09:28.325 10:04:41 -- common/autotest_common.sh@945 -- # kill 153600 00:09:28.325 10:04:41 -- common/autotest_common.sh@950 -- # wait 153600 00:09:28.894 00:09:28.894 real 0m2.974s 00:09:28.894 user 0m3.174s 00:09:28.894 sys 0m0.759s 00:09:28.894 10:04:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.894 10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 ************************************ 00:09:28.894 END TEST locking_app_on_unlocked_coremask 00:09:28.894 
************************************ 00:09:28.894 10:04:41 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:28.894 10:04:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:28.894 10:04:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:28.894 10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 ************************************ 00:09:28.894 START TEST locking_app_on_locked_coremask 00:09:28.894 ************************************ 00:09:28.894 10:04:41 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:28.894 10:04:41 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=153919 00:09:28.894 10:04:41 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:28.894 10:04:41 -- event/cpu_locks.sh@116 -- # waitforlisten 153919 /var/tmp/spdk.sock 00:09:28.894 10:04:41 -- common/autotest_common.sh@819 -- # '[' -z 153919 ']' 00:09:28.894 10:04:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.894 10:04:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:28.894 10:04:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.894 10:04:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:28.894 10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 [2024-04-24 10:04:41.988738] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:28.894 [2024-04-24 10:04:41.988799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153919 ] 00:09:28.894 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.894 [2024-04-24 10:04:42.043636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.894 [2024-04-24 10:04:42.115288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.894 [2024-04-24 10:04:42.115414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.831 10:04:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:29.831 10:04:42 -- common/autotest_common.sh@852 -- # return 0 00:09:29.831 10:04:42 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=154155 00:09:29.831 10:04:42 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 154155 /var/tmp/spdk2.sock 00:09:29.831 10:04:42 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:29.831 10:04:42 -- common/autotest_common.sh@640 -- # local es=0 00:09:29.831 10:04:42 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 154155 /var/tmp/spdk2.sock 00:09:29.831 10:04:42 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:29.831 10:04:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:29.831 10:04:42 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:29.831 10:04:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:29.831 10:04:42 -- common/autotest_common.sh@643 -- # waitforlisten 154155 /var/tmp/spdk2.sock 00:09:29.831 10:04:42 -- common/autotest_common.sh@819 -- # '[' -z 154155 
']' 00:09:29.831 10:04:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.831 10:04:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.831 10:04:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:29.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:29.831 10:04:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.831 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:09:29.831 [2024-04-24 10:04:42.834951] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:29.831 [2024-04-24 10:04:42.834997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154155 ] 00:09:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.831 [2024-04-24 10:04:42.907386] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 153919 has claimed it. 00:09:29.831 [2024-04-24 10:04:42.907426] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:30.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (154155) - No such process 00:09:30.399 ERROR: process (pid: 154155) is no longer running 00:09:30.399 10:04:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.399 10:04:43 -- common/autotest_common.sh@852 -- # return 1 00:09:30.399 10:04:43 -- common/autotest_common.sh@643 -- # es=1 00:09:30.399 10:04:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:30.399 10:04:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:30.399 10:04:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:30.399 10:04:43 -- event/cpu_locks.sh@122 -- # locks_exist 153919 00:09:30.399 10:04:43 -- event/cpu_locks.sh@22 -- # lslocks -p 153919 00:09:30.399 10:04:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:30.658 lslocks: write error 00:09:30.659 10:04:43 -- event/cpu_locks.sh@124 -- # killprocess 153919 00:09:30.659 10:04:43 -- common/autotest_common.sh@926 -- # '[' -z 153919 ']' 00:09:30.659 10:04:43 -- common/autotest_common.sh@930 -- # kill -0 153919 00:09:30.659 10:04:43 -- common/autotest_common.sh@931 -- # uname 00:09:30.659 10:04:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:30.659 10:04:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153919 00:09:30.659 10:04:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:30.659 10:04:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:30.659 10:04:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153919' 00:09:30.659 killing process with pid 153919 00:09:30.659 10:04:43 -- common/autotest_common.sh@945 -- # kill 153919 00:09:30.659 10:04:43 -- common/autotest_common.sh@950 -- # wait 153919 00:09:31.226 00:09:31.226 real 0m2.283s 00:09:31.226 user 0m2.511s 00:09:31.226 sys 0m0.610s 00:09:31.226 10:04:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:31.226 ************************************ 00:09:31.226 END TEST locking_app_on_locked_coremask 00:09:31.226 ************************************ 00:09:31.226 10:04:44 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:31.226 10:04:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:31.226 10:04:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:31.226 ************************************ 00:09:31.226 START TEST locking_overlapped_coremask 00:09:31.226 ************************************ 00:09:31.226 10:04:44 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:31.226 10:04:44 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=154416 00:09:31.226 10:04:44 -- event/cpu_locks.sh@133 -- # waitforlisten 154416 /var/tmp/spdk.sock 00:09:31.226 10:04:44 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:31.226 10:04:44 -- common/autotest_common.sh@819 -- # '[' -z 154416 ']' 00:09:31.226 10:04:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.226 10:04:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:31.226 10:04:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.226 10:04:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:31.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:31.226 [2024-04-24 10:04:44.312274] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:31.226 [2024-04-24 10:04:44.312326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154416 ] 00:09:31.226 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.226 [2024-04-24 10:04:44.366225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.226 [2024-04-24 10:04:44.445374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.226 [2024-04-24 10:04:44.445507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.226 [2024-04-24 10:04:44.445600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.226 [2024-04-24 10:04:44.445601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.162 10:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:32.162 10:04:45 -- common/autotest_common.sh@852 -- # return 0 00:09:32.162 10:04:45 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=154545 00:09:32.162 10:04:45 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 154545 /var/tmp/spdk2.sock 00:09:32.162 10:04:45 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:32.162 10:04:45 -- common/autotest_common.sh@640 -- # local es=0 00:09:32.162 10:04:45 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 154545 /var/tmp/spdk2.sock 00:09:32.162 10:04:45 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:32.162 10:04:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:32.162 10:04:45 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:32.162 10:04:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:32.162 10:04:45 -- 
common/autotest_common.sh@643 -- # waitforlisten 154545 /var/tmp/spdk2.sock 00:09:32.162 10:04:45 -- common/autotest_common.sh@819 -- # '[' -z 154545 ']' 00:09:32.162 10:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.162 10:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.162 10:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.162 10:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.162 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:32.162 [2024-04-24 10:04:45.170448] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:32.162 [2024-04-24 10:04:45.170495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154545 ] 00:09:32.162 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.162 [2024-04-24 10:04:45.246300] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 154416 has claimed it. 00:09:32.162 [2024-04-24 10:04:45.246351] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:32.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (154545) - No such process 00:09:32.728 ERROR: process (pid: 154545) is no longer running 00:09:32.728 10:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:32.728 10:04:45 -- common/autotest_common.sh@852 -- # return 1 00:09:32.728 10:04:45 -- common/autotest_common.sh@643 -- # es=1 00:09:32.728 10:04:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:32.728 10:04:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:32.728 10:04:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:32.728 10:04:45 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:32.728 10:04:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:32.728 10:04:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:32.728 10:04:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:32.728 10:04:45 -- event/cpu_locks.sh@141 -- # killprocess 154416 00:09:32.728 10:04:45 -- common/autotest_common.sh@926 -- # '[' -z 154416 ']' 00:09:32.728 10:04:45 -- common/autotest_common.sh@930 -- # kill -0 154416 00:09:32.728 10:04:45 -- common/autotest_common.sh@931 -- # uname 00:09:32.728 10:04:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:32.728 10:04:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154416 00:09:32.728 10:04:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:32.728 10:04:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:32.728 10:04:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154416' 00:09:32.728 killing process with pid 154416 00:09:32.728 10:04:45 -- common/autotest_common.sh@945 -- # kill 154416 00:09:32.728 10:04:45 -- 
common/autotest_common.sh@950 -- # wait 154416 00:09:32.988 00:09:32.988 real 0m1.921s 00:09:32.988 user 0m5.417s 00:09:32.988 sys 0m0.391s 00:09:32.988 10:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.988 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:32.988 ************************************ 00:09:32.988 END TEST locking_overlapped_coremask 00:09:32.988 ************************************ 00:09:32.988 10:04:46 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:32.988 10:04:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.988 10:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.988 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:32.988 ************************************ 00:09:32.988 START TEST locking_overlapped_coremask_via_rpc 00:09:32.988 ************************************ 00:09:32.988 10:04:46 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:32.988 10:04:46 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=154694 00:09:32.988 10:04:46 -- event/cpu_locks.sh@149 -- # waitforlisten 154694 /var/tmp/spdk.sock 00:09:32.988 10:04:46 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:32.988 10:04:46 -- common/autotest_common.sh@819 -- # '[' -z 154694 ']' 00:09:32.988 10:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.988 10:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.988 10:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.988 10:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.988 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:33.247 [2024-04-24 10:04:46.272726] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:33.247 [2024-04-24 10:04:46.272776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154694 ] 00:09:33.247 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.247 [2024-04-24 10:04:46.328550] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
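locking_overlapped_coremask, closed out just above, ends with check_remaining_locks: with the overlapping 0x1c target refused and the 0x7 target still alive, exactly the lock files for cores 0-2 must be present in /var/tmp. The traced expansion amounts to:

    # mirrors the traced expansion: a 0x7 core mask should leave exactly
    # the lock files for cores 000, 001 and 002 in /var/tmp
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }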
00:09:33.247 [2024-04-24 10:04:46.328581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.247 [2024-04-24 10:04:46.402820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.247 [2024-04-24 10:04:46.402976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.247 [2024-04-24 10:04:46.403079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.247 [2024-04-24 10:04:46.403082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.813 10:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:33.813 10:04:47 -- common/autotest_common.sh@852 -- # return 0 00:09:33.813 10:04:47 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=154930 00:09:33.813 10:04:47 -- event/cpu_locks.sh@153 -- # waitforlisten 154930 /var/tmp/spdk2.sock 00:09:33.813 10:04:47 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:33.813 10:04:47 -- common/autotest_common.sh@819 -- # '[' -z 154930 ']' 00:09:33.813 10:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:33.813 10:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.813 10:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:33.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:33.813 10:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.813 10:04:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.072 [2024-04-24 10:04:47.115423] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:34.072 [2024-04-24 10:04:47.115472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154930 ] 00:09:34.072 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.072 [2024-04-24 10:04:47.190943] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:34.072 [2024-04-24 10:04:47.190975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.072 [2024-04-24 10:04:47.334181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.072 [2024-04-24 10:04:47.334342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.072 [2024-04-24 10:04:47.334459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.072 [2024-04-24 10:04:47.334460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.639 10:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.639 10:04:47 -- common/autotest_common.sh@852 -- # return 0 00:09:34.639 10:04:47 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:34.639 10:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:34.639 10:04:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.899 10:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:34.899 10:04:47 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:34.899 10:04:47 -- common/autotest_common.sh@640 -- # local es=0 00:09:34.899 10:04:47 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:34.899 10:04:47 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:34.899 10:04:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.899 10:04:47 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:34.899 10:04:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.899 10:04:47 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:34.899 10:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:34.899 10:04:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.899 [2024-04-24 10:04:47.935139] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 154694 has claimed it. 00:09:34.899 request: 00:09:34.899 { 00:09:34.899 "method": "framework_enable_cpumask_locks", 00:09:34.899 "req_id": 1 00:09:34.899 } 00:09:34.899 Got JSON-RPC error response 00:09:34.899 response: 00:09:34.899 { 00:09:34.899 "code": -32603, 00:09:34.899 "message": "Failed to claim CPU core: 2" 00:09:34.899 } 00:09:34.899 10:04:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:34.899 10:04:47 -- common/autotest_common.sh@643 -- # es=1 00:09:34.899 10:04:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:34.899 10:04:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:34.899 10:04:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:34.899 10:04:47 -- event/cpu_locks.sh@158 -- # waitforlisten 154694 /var/tmp/spdk.sock 00:09:34.899 10:04:47 -- common/autotest_common.sh@819 -- # '[' -z 154694 ']' 00:09:34.899 10:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.899 10:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.899 10:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
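The request/response pair above is the heart of this test: after framework_enable_cpumask_locks succeeds on the first target (mask 0x7, claiming cores 0-2), the same call against the second target (mask 0x1c, on /var/tmp/spdk2.sock) has to fail, because pid 154694 already holds the shared core 2. Rerun by hand it would look like this (same rpc.py and socket path used elsewhere in the trace):

    # manual equivalent of the traced rpc_cmd call; expected to fail with
    # JSON-RPC error -32603 "Failed to claim CPU core: 2" while pid 154694 lives
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks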
00:09:34.899 10:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.899 10:04:47 -- common/autotest_common.sh@10 -- # set +x 00:09:34.899 10:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.899 10:04:48 -- common/autotest_common.sh@852 -- # return 0 00:09:34.899 10:04:48 -- event/cpu_locks.sh@159 -- # waitforlisten 154930 /var/tmp/spdk2.sock 00:09:34.899 10:04:48 -- common/autotest_common.sh@819 -- # '[' -z 154930 ']' 00:09:34.899 10:04:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.899 10:04:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.899 10:04:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.899 10:04:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.899 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:09:35.161 10:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:35.161 10:04:48 -- common/autotest_common.sh@852 -- # return 0 00:09:35.161 10:04:48 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:35.161 10:04:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:35.161 10:04:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:35.161 10:04:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:35.161 00:09:35.161 real 0m2.102s 00:09:35.161 user 0m0.875s 00:09:35.161 sys 0m0.156s 00:09:35.161 10:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.161 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:09:35.161 ************************************ 00:09:35.161 END TEST locking_overlapped_coremask_via_rpc 00:09:35.161 ************************************ 00:09:35.161 10:04:48 -- event/cpu_locks.sh@174 -- # cleanup 00:09:35.161 10:04:48 -- event/cpu_locks.sh@15 -- # [[ -z 154694 ]] 00:09:35.161 10:04:48 -- event/cpu_locks.sh@15 -- # killprocess 154694 00:09:35.161 10:04:48 -- common/autotest_common.sh@926 -- # '[' -z 154694 ']' 00:09:35.161 10:04:48 -- common/autotest_common.sh@930 -- # kill -0 154694 00:09:35.161 10:04:48 -- common/autotest_common.sh@931 -- # uname 00:09:35.161 10:04:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:35.161 10:04:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154694 00:09:35.161 10:04:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:35.161 10:04:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:35.161 10:04:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154694' 00:09:35.161 killing process with pid 154694 00:09:35.161 10:04:48 -- common/autotest_common.sh@945 -- # kill 154694 00:09:35.161 10:04:48 -- common/autotest_common.sh@950 -- # wait 154694 00:09:35.730 10:04:48 -- event/cpu_locks.sh@16 -- # [[ -z 154930 ]] 00:09:35.730 10:04:48 -- event/cpu_locks.sh@16 -- # killprocess 154930 00:09:35.730 10:04:48 -- common/autotest_common.sh@926 -- # '[' -z 154930 ']' 00:09:35.730 10:04:48 -- common/autotest_common.sh@930 -- # kill -0 154930 00:09:35.730 10:04:48 -- common/autotest_common.sh@931 -- # uname 00:09:35.730 
10:04:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:35.730 10:04:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154930 00:09:35.730 10:04:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:35.730 10:04:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:35.730 10:04:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 154930' 00:09:35.730 killing process with pid 154930 00:09:35.730 10:04:48 -- common/autotest_common.sh@945 -- # kill 154930 00:09:35.730 10:04:48 -- common/autotest_common.sh@950 -- # wait 154930 00:09:35.990 10:04:49 -- event/cpu_locks.sh@18 -- # rm -f 00:09:35.990 10:04:49 -- event/cpu_locks.sh@1 -- # cleanup 00:09:35.990 10:04:49 -- event/cpu_locks.sh@15 -- # [[ -z 154694 ]] 00:09:35.990 10:04:49 -- event/cpu_locks.sh@15 -- # killprocess 154694 00:09:35.990 10:04:49 -- common/autotest_common.sh@926 -- # '[' -z 154694 ']' 00:09:35.990 10:04:49 -- common/autotest_common.sh@930 -- # kill -0 154694 00:09:35.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (154694) - No such process 00:09:35.990 10:04:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 154694 is not found' 00:09:35.990 Process with pid 154694 is not found 00:09:35.990 10:04:49 -- event/cpu_locks.sh@16 -- # [[ -z 154930 ]] 00:09:35.990 10:04:49 -- event/cpu_locks.sh@16 -- # killprocess 154930 00:09:35.990 10:04:49 -- common/autotest_common.sh@926 -- # '[' -z 154930 ']' 00:09:35.990 10:04:49 -- common/autotest_common.sh@930 -- # kill -0 154930 00:09:35.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (154930) - No such process 00:09:35.990 10:04:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 154930 is not found' 00:09:35.990 Process with pid 154930 is not found 00:09:35.990 10:04:49 -- event/cpu_locks.sh@18 -- # rm -f 00:09:35.990 00:09:35.990 real 0m16.334s 00:09:35.990 user 0m28.862s 00:09:35.990 sys 0m4.327s 00:09:35.990 10:04:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.990 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.990 ************************************ 00:09:35.990 END TEST cpu_locks 00:09:35.990 ************************************ 00:09:35.990 00:09:35.990 real 0m41.594s 00:09:35.990 user 1m20.907s 00:09:35.990 sys 0m7.398s 00:09:35.990 10:04:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.990 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.990 ************************************ 00:09:35.990 END TEST event 00:09:35.990 ************************************ 00:09:35.990 10:04:49 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:35.990 10:04:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.990 10:04:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.990 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.990 ************************************ 00:09:35.990 START TEST thread 00:09:35.990 ************************************ 00:09:35.990 10:04:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:36.249 * Looking for test storage... 
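The cleanup above follows autotest_common.sh's killprocess pattern: kill -0 probes whether the pid still exists, ps --no-headers -o comm= guards against signalling an unrelated process (or sudo) that reused the pid, and wait reaps the child; when the process is already gone, kill reports "No such process" and the helper just logs that. A condensed sketch of the same logic, assuming the target was started by the current shell:

  killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "Process with pid $pid is not found"
      return 0
    fi
    # never signal a recycled pid that now belongs to sudo
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    kill "$pid" && wait "$pid"
  }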
00:09:36.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:36.249 10:04:49 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:36.249 10:04:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:36.249 10:04:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.249 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:36.249 ************************************ 00:09:36.249 START TEST thread_poller_perf 00:09:36.249 ************************************ 00:09:36.249 10:04:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:36.249 [2024-04-24 10:04:49.316882] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:36.249 [2024-04-24 10:04:49.316967] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155420 ] 00:09:36.249 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.249 [2024-04-24 10:04:49.374885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.249 [2024-04-24 10:04:49.447529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.249 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:37.627 ====================================== 00:09:37.627 busy:2309742396 (cyc) 00:09:37.627 total_run_count: 385000 00:09:37.627 tsc_hz: 2300000000 (cyc) 00:09:37.627 ====================================== 00:09:37.627 poller_cost: 5999 (cyc), 2608 (nsec) 00:09:37.627 00:09:37.627 real 0m1.254s 00:09:37.627 user 0m1.176s 00:09:37.627 sys 0m0.074s 00:09:37.627 10:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.627 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 ************************************ 00:09:37.627 END TEST thread_poller_perf 00:09:37.627 ************************************ 00:09:37.627 10:04:50 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:37.627 10:04:50 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:37.627 10:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.627 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 ************************************ 00:09:37.627 START TEST thread_poller_perf 00:09:37.627 ************************************ 00:09:37.627 10:04:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:37.627 [2024-04-24 10:04:50.610151] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
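The poller_cost line above follows directly from the two counters printed with it: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate in GHz (2300000000 cyc/s, i.e. 2.3 cyc/ns). Re-deriving the 1-microsecond-period result from the numbers in this output:

  busy=2309742396 runs=385000 tsc_hz=2300000000
  echo "$((busy / runs)) cyc"                           # 5999
  awk -v c="$((busy / runs))" -v hz="$tsc_hz" \
    'BEGIN { printf "%.0f nsec\n", c / (hz / 1e9) }'    # 2608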
00:09:37.627 [2024-04-24 10:04:50.610230] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155639 ] 00:09:37.627 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.627 [2024-04-24 10:04:50.666786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.627 [2024-04-24 10:04:50.744881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.627 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:38.564 ====================================== 00:09:38.564 busy:2302051984 (cyc) 00:09:38.564 total_run_count: 5264000 00:09:38.564 tsc_hz: 2300000000 (cyc) 00:09:38.564 ====================================== 00:09:38.564 poller_cost: 437 (cyc), 190 (nsec) 00:09:38.564 00:09:38.564 real 0m1.245s 00:09:38.564 user 0m1.166s 00:09:38.564 sys 0m0.076s 00:09:38.564 10:04:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.564 10:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:38.564 ************************************ 00:09:38.564 END TEST thread_poller_perf 00:09:38.564 ************************************ 00:09:38.824 10:04:51 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:38.825 00:09:38.825 real 0m2.658s 00:09:38.825 user 0m2.398s 00:09:38.825 sys 0m0.273s 00:09:38.825 10:04:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.825 10:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:38.825 ************************************ 00:09:38.825 END TEST thread 00:09:38.825 ************************************ 00:09:38.825 10:04:51 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:38.825 10:04:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:38.825 10:04:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.825 10:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:38.825 ************************************ 00:09:38.825 START TEST accel 00:09:38.825 ************************************ 00:09:38.825 10:04:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:38.825 * Looking for test storage... 00:09:38.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:38.825 10:04:51 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:38.825 10:04:51 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:38.825 10:04:51 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:38.825 10:04:51 -- accel/accel.sh@59 -- # spdk_tgt_pid=155952 00:09:38.825 10:04:51 -- accel/accel.sh@60 -- # waitforlisten 155952 00:09:38.825 10:04:51 -- common/autotest_common.sh@819 -- # '[' -z 155952 ']' 00:09:38.825 10:04:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.825 10:04:51 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:38.825 10:04:51 -- accel/accel.sh@58 -- # build_accel_config 00:09:38.825 10:04:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.825 10:04:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
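The two poller_perf runs above differ only in -l, the poller period in microseconds: -l 1 registers 1000 timed pollers while -l 0 registers plain pollers that run every reactor iteration, which plausibly explains the per-poll cost dropping from 5999 to 437 cycles. The invocations, as issued by thread.sh:

  # 1000 timed pollers, 1 us period
  ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
  # 1000 untimed pollers, polled on every reactor iteration
  ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1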
00:09:38.825 10:04:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:38.825 10:04:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.825 10:04:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:38.825 10:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:38.825 10:04:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:38.825 10:04:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:38.825 10:04:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:38.825 10:04:51 -- accel/accel.sh@41 -- # local IFS=, 00:09:38.825 10:04:51 -- accel/accel.sh@42 -- # jq -r . 00:09:38.825 [2024-04-24 10:04:52.030397] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:38.825 [2024-04-24 10:04:52.030448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155952 ] 00:09:38.825 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.825 [2024-04-24 10:04:52.085336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.084 [2024-04-24 10:04:52.161490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.084 [2024-04-24 10:04:52.161607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.648 10:04:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.648 10:04:52 -- common/autotest_common.sh@852 -- # return 0 00:09:39.648 10:04:52 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:39.648 10:04:52 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:39.648 10:04:52 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:39.648 10:04:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.648 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:09:39.648 10:04:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.648 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.648 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.648 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.648 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.648 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.648 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 
10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # IFS== 00:09:39.649 10:04:52 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.649 10:04:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.649 10:04:52 -- accel/accel.sh@67 -- # killprocess 155952 00:09:39.649 10:04:52 -- common/autotest_common.sh@926 -- # '[' -z 155952 ']' 00:09:39.649 10:04:52 -- common/autotest_common.sh@930 -- # kill -0 155952 00:09:39.649 10:04:52 -- common/autotest_common.sh@931 -- # uname 00:09:39.649 10:04:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:39.649 10:04:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 155952 00:09:39.649 10:04:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:39.649 10:04:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:39.649 10:04:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 155952' 00:09:39.649 killing process with pid 155952 00:09:39.649 10:04:52 -- common/autotest_common.sh@945 -- # kill 155952 00:09:39.649 10:04:52 -- common/autotest_common.sh@950 -- # wait 155952 00:09:40.217 10:04:53 -- accel/accel.sh@68 -- # trap - ERR 00:09:40.217 10:04:53 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:40.217 10:04:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:40.217 10:04:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.217 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:09:40.217 10:04:53 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:40.217 10:04:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:40.217 10:04:53 -- accel/accel.sh@12 -- # build_accel_config 00:09:40.217 10:04:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:40.217 10:04:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.217 10:04:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.217 10:04:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:40.217 10:04:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:40.217 10:04:53 -- accel/accel.sh@41 -- # local IFS=, 00:09:40.217 10:04:53 -- accel/accel.sh@42 -- # jq -r . 
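The long run of IFS== / read lines above is accel.sh mapping every accel opcode to its module: the accel_get_opc_assignments RPC returns a JSON object, jq flattens it into key=value lines, and the loop splits each line on '=' to record the module (software throughout this run). A standalone sketch of the same parsing, with canned JSON standing in for the live RPC:

  json='{"copy":"software","fill":"software","crc32c":"software"}'
  while IFS== read -r opc module; do
    echo "opcode $opc handled by $module"
  done < <(printf '%s' "$json" | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')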
00:09:40.217 10:04:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.217 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:09:40.217 10:04:53 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:40.217 10:04:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:40.217 10:04:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.217 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:09:40.217 ************************************ 00:09:40.217 START TEST accel_missing_filename 00:09:40.217 ************************************ 00:09:40.217 10:04:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:40.217 10:04:53 -- common/autotest_common.sh@640 -- # local es=0 00:09:40.217 10:04:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:40.217 10:04:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:40.217 10:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.217 10:04:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:40.217 10:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.217 10:04:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:40.217 10:04:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:40.217 10:04:53 -- accel/accel.sh@12 -- # build_accel_config 00:09:40.217 10:04:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:40.217 10:04:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.217 10:04:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.217 10:04:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:40.217 10:04:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:40.217 10:04:53 -- accel/accel.sh@41 -- # local IFS=, 00:09:40.217 10:04:53 -- accel/accel.sh@42 -- # jq -r . 00:09:40.217 [2024-04-24 10:04:53.344349] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:40.217 [2024-04-24 10:04:53.344398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156219 ] 00:09:40.217 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.217 [2024-04-24 10:04:53.397829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.217 [2024-04-24 10:04:53.469383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.474 [2024-04-24 10:04:53.510994] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.474 [2024-04-24 10:04:53.571058] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:40.474 A filename is required. 
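accel_missing_filename runs a compress workload with no input file on purpose; per the usage text later in this log, -l names the uncompressed input for compress/decompress workloads, so accel_perf exits with the error above. A sketch of the failing call and the presumed corrected form (the test's own input file is test/accel/bib):

  # fails: compress with no input file
  ./build/examples/accel_perf -t 1 -w compress || echo 'failed as expected'
  # with an input file the workload can run
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib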
00:09:40.474 10:04:53 -- common/autotest_common.sh@643 -- # es=234 00:09:40.474 10:04:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:40.474 10:04:53 -- common/autotest_common.sh@652 -- # es=106 00:09:40.474 10:04:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:40.474 10:04:53 -- common/autotest_common.sh@660 -- # es=1 00:09:40.474 10:04:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:40.474 00:09:40.474 real 0m0.339s 00:09:40.474 user 0m0.267s 00:09:40.474 sys 0m0.109s 00:09:40.474 10:04:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.474 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 ************************************ 00:09:40.475 END TEST accel_missing_filename 00:09:40.475 ************************************ 00:09:40.475 10:04:53 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:40.475 10:04:53 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:40.475 10:04:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.475 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 ************************************ 00:09:40.475 START TEST accel_compress_verify 00:09:40.475 ************************************ 00:09:40.475 10:04:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:40.475 10:04:53 -- common/autotest_common.sh@640 -- # local es=0 00:09:40.475 10:04:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:40.475 10:04:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:40.475 10:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.475 10:04:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:40.475 10:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.475 10:04:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:40.475 10:04:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:40.475 10:04:53 -- accel/accel.sh@12 -- # build_accel_config 00:09:40.475 10:04:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:40.475 10:04:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.475 10:04:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.475 10:04:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:40.475 10:04:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:40.475 10:04:53 -- accel/accel.sh@41 -- # local IFS=, 00:09:40.475 10:04:53 -- accel/accel.sh@42 -- # jq -r . 00:09:40.475 [2024-04-24 10:04:53.732984] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
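The es= bookkeeping above is autotest_common.sh's NOT wrapper at work: the wrapped command's exit status (234 here) is folded into a small canonical value and then inverted, so NOT succeeds exactly when the command fails. A reduced sketch of just the inversion, leaving out the real helper's argument validation and status folding:

  NOT() {
    "$@" && return 1    # command unexpectedly succeeded
    return 0            # command failed, which is what NOT wants
  }
  NOT false && echo 'NOT works'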
00:09:40.475 [2024-04-24 10:04:53.733057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156319 ] 00:09:40.732 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.732 [2024-04-24 10:04:53.790457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.732 [2024-04-24 10:04:53.860331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.732 [2024-04-24 10:04:53.901055] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.732 [2024-04-24 10:04:53.960755] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:40.991 00:09:40.991 Compression does not support the verify option, aborting. 00:09:40.991 10:04:54 -- common/autotest_common.sh@643 -- # es=161 00:09:40.991 10:04:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:40.991 10:04:54 -- common/autotest_common.sh@652 -- # es=33 00:09:40.991 10:04:54 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:40.991 10:04:54 -- common/autotest_common.sh@660 -- # es=1 00:09:40.991 10:04:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:40.991 00:09:40.991 real 0m0.353s 00:09:40.991 user 0m0.275s 00:09:40.991 sys 0m0.118s 00:09:40.991 10:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.991 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 ************************************ 00:09:40.991 END TEST accel_compress_verify 00:09:40.991 ************************************ 00:09:40.991 10:04:54 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:40.991 10:04:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:40.991 10:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.991 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 ************************************ 00:09:40.991 START TEST accel_wrong_workload 00:09:40.991 ************************************ 00:09:40.991 10:04:54 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:09:40.991 10:04:54 -- common/autotest_common.sh@640 -- # local es=0 00:09:40.991 10:04:54 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:40.991 10:04:54 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:40.991 10:04:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.991 10:04:54 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:40.991 10:04:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.991 10:04:54 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:09:40.991 10:04:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:40.991 10:04:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:40.992 10:04:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:40.992 10:04:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:40.992 10:04:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:40.992 10:04:54 -- accel/accel.sh@42 -- # jq -r . 
00:09:40.992 Unsupported workload type: foobar 00:09:40.992 [2024-04-24 10:04:54.119633] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:40.992 accel_perf options: 00:09:40.992 [-h help message] 00:09:40.992 [-q queue depth per core] 00:09:40.992 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:40.992 [-T number of threads per core 00:09:40.992 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:40.992 [-t time in seconds] 00:09:40.992 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:40.992 [ dif_verify, , dif_generate, dif_generate_copy 00:09:40.992 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:40.992 [-l for compress/decompress workloads, name of uncompressed input file 00:09:40.992 [-S for crc32c workload, use this seed value (default 0) 00:09:40.992 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:40.992 [-f for fill workload, use this BYTE value (default 255) 00:09:40.992 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:40.992 [-y verify result if this switch is on] 00:09:40.992 [-a tasks to allocate per core (default: same value as -q)] 00:09:40.992 Can be used to spread operations across a wider range of memory. 00:09:40.992 10:04:54 -- common/autotest_common.sh@643 -- # es=1 00:09:40.992 10:04:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:40.992 10:04:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:40.992 10:04:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:40.992 00:09:40.992 real 0m0.033s 00:09:40.992 user 0m0.017s 00:09:40.992 sys 0m0.016s 00:09:40.992 10:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.992 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.992 ************************************ 00:09:40.992 END TEST accel_wrong_workload 00:09:40.992 ************************************ 00:09:40.992 Error: writing output failed: Broken pipe 00:09:40.992 10:04:54 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:40.992 10:04:54 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:40.992 10:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.992 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.992 ************************************ 00:09:40.992 START TEST accel_negative_buffers 00:09:40.992 ************************************ 00:09:40.992 10:04:54 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:40.992 10:04:54 -- common/autotest_common.sh@640 -- # local es=0 00:09:40.992 10:04:54 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:40.992 10:04:54 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:40.992 10:04:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.992 10:04:54 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:40.992 10:04:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:40.992 10:04:54 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:09:40.992 10:04:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:09:40.992 10:04:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:40.992 10:04:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:40.992 10:04:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:40.992 10:04:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:40.992 10:04:54 -- accel/accel.sh@42 -- # jq -r . 00:09:40.992 -x option must be non-negative. 00:09:40.992 [2024-04-24 10:04:54.191849] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:40.992 accel_perf options: 00:09:40.992 [-h help message] 00:09:40.992 [-q queue depth per core] 00:09:40.992 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:40.992 [-T number of threads per core 00:09:40.992 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:40.992 [-t time in seconds] 00:09:40.992 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:40.992 [ dif_verify, , dif_generate, dif_generate_copy 00:09:40.992 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:40.992 [-l for compress/decompress workloads, name of uncompressed input file 00:09:40.992 [-S for crc32c workload, use this seed value (default 0) 00:09:40.992 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:40.992 [-f for fill workload, use this BYTE value (default 255) 00:09:40.992 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:40.992 [-y verify result if this switch is on] 00:09:40.992 [-a tasks to allocate per core (default: same value as -q)] 00:09:40.992 Can be used to spread operations across a wider range of memory. 
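accel_negative_buffers hands the xor workload -x -1; the option parser rejects it before any work is queued ("-x option must be non-negative.") and prints the usage text above, which also states a minimum of 2 source buffers. A sketch of the rejected call next to a presumably valid one:

  # rejected before any I/O is issued
  ./build/examples/accel_perf -t 1 -w xor -y -x -1 || echo 'rejected as expected'
  # minimum legal source-buffer count per the usage text
  ./build/examples/accel_perf -t 1 -w xor -y -x 2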
00:09:40.992 10:04:54 -- common/autotest_common.sh@643 -- # es=1 00:09:40.992 10:04:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:40.992 10:04:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:40.992 10:04:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:40.992 00:09:40.992 real 0m0.034s 00:09:40.992 user 0m0.020s 00:09:40.992 sys 0m0.013s 00:09:40.992 10:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.992 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.992 ************************************ 00:09:40.992 END TEST accel_negative_buffers 00:09:40.992 ************************************ 00:09:40.992 Error: writing output failed: Broken pipe 00:09:40.992 10:04:54 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:40.992 10:04:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:40.992 10:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.992 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.992 ************************************ 00:09:40.992 START TEST accel_crc32c 00:09:40.992 ************************************ 00:09:40.992 10:04:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:40.992 10:04:54 -- accel/accel.sh@16 -- # local accel_opc 00:09:40.992 10:04:54 -- accel/accel.sh@17 -- # local accel_module 00:09:40.992 10:04:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:40.992 10:04:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:40.992 10:04:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:40.992 10:04:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:40.992 10:04:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:40.992 10:04:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:40.992 10:04:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:40.992 10:04:54 -- accel/accel.sh@42 -- # jq -r . 00:09:40.992 [2024-04-24 10:04:54.259905] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:40.992 [2024-04-24 10:04:54.259961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156379 ] 00:09:41.250 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.250 [2024-04-24 10:04:54.316413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.250 [2024-04-24 10:04:54.391260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.622 10:04:55 -- accel/accel.sh@18 -- # out=' 00:09:42.622 SPDK Configuration: 00:09:42.622 Core mask: 0x1 00:09:42.622 00:09:42.622 Accel Perf Configuration: 00:09:42.622 Workload Type: crc32c 00:09:42.622 CRC-32C seed: 32 00:09:42.622 Transfer size: 4096 bytes 00:09:42.622 Vector count 1 00:09:42.622 Module: software 00:09:42.622 Queue depth: 32 00:09:42.622 Allocate depth: 32 00:09:42.622 # threads/core: 1 00:09:42.622 Run time: 1 seconds 00:09:42.622 Verify: Yes 00:09:42.622 00:09:42.622 Running for 1 seconds... 
00:09:42.622 00:09:42.622 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:42.622 ------------------------------------------------------------------------------------ 00:09:42.622 0,0 573792/s 2241 MiB/s 0 0 00:09:42.622 ==================================================================================== 00:09:42.622 Total 573792/s 2241 MiB/s 0 0' 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:42.622 10:04:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:42.622 10:04:55 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.622 10:04:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.622 10:04:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.622 10:04:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.622 10:04:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.622 10:04:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.622 10:04:55 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.622 10:04:55 -- accel/accel.sh@42 -- # jq -r . 00:09:42.622 [2024-04-24 10:04:55.618734] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:42.622 [2024-04-24 10:04:55.618812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156619 ] 00:09:42.622 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.622 [2024-04-24 10:04:55.675446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.622 [2024-04-24 10:04:55.744293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=0x1 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=crc32c 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=32 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 
10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=software 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@23 -- # accel_module=software 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=32 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=32 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=1 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val=Yes 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:42.622 10:04:55 -- accel/accel.sh@21 -- # val= 00:09:42.622 10:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # IFS=: 00:09:42.622 10:04:55 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@21 -- # val= 00:09:43.999 10:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # IFS=: 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@21 -- # val= 00:09:43.999 10:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # IFS=: 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@21 -- # val= 00:09:43.999 10:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # IFS=: 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@21 -- # val= 00:09:43.999 10:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # IFS=: 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@21 -- # val= 00:09:43.999 10:04:56 -- accel/accel.sh@22 -- # case "$var" in 
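The crc32c bandwidth reported above is simply transfers times transfer size: 573792 ops/s at 4096 bytes each is about 2241 MiB/s, matching the table. A one-line check:

  echo "$((573792 * 4096 / 1048576)) MiB/s"   # 2241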
00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # IFS=: 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@21 -- # val= 00:09:43.999 10:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # IFS=: 00:09:43.999 10:04:56 -- accel/accel.sh@20 -- # read -r var val 00:09:43.999 10:04:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:43.999 10:04:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:43.999 10:04:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:43.999 00:09:43.999 real 0m2.712s 00:09:43.999 user 0m2.495s 00:09:43.999 sys 0m0.224s 00:09:43.999 10:04:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.999 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:09:43.999 ************************************ 00:09:43.999 END TEST accel_crc32c 00:09:43.999 ************************************ 00:09:43.999 10:04:56 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:43.999 10:04:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:43.999 10:04:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.999 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:09:43.999 ************************************ 00:09:43.999 START TEST accel_crc32c_C2 00:09:43.999 ************************************ 00:09:43.999 10:04:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:43.999 10:04:56 -- accel/accel.sh@16 -- # local accel_opc 00:09:43.999 10:04:56 -- accel/accel.sh@17 -- # local accel_module 00:09:43.999 10:04:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:43.999 10:04:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:43.999 10:04:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:43.999 10:04:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:43.999 10:04:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:43.999 10:04:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:43.999 10:04:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:43.999 10:04:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:43.999 10:04:56 -- accel/accel.sh@41 -- # local IFS=, 00:09:43.999 10:04:56 -- accel/accel.sh@42 -- # jq -r . 00:09:43.999 [2024-04-24 10:04:57.009647] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:44.000 [2024-04-24 10:04:57.009705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156867 ] 00:09:44.000 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.000 [2024-04-24 10:04:57.063397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.000 [2024-04-24 10:04:57.132747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.376 10:04:58 -- accel/accel.sh@18 -- # out=' 00:09:45.376 SPDK Configuration: 00:09:45.376 Core mask: 0x1 00:09:45.376 00:09:45.376 Accel Perf Configuration: 00:09:45.376 Workload Type: crc32c 00:09:45.376 CRC-32C seed: 0 00:09:45.376 Transfer size: 4096 bytes 00:09:45.376 Vector count 2 00:09:45.376 Module: software 00:09:45.376 Queue depth: 32 00:09:45.376 Allocate depth: 32 00:09:45.376 # threads/core: 1 00:09:45.376 Run time: 1 seconds 00:09:45.376 Verify: Yes 00:09:45.376 00:09:45.376 Running for 1 seconds... 00:09:45.376 00:09:45.376 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:45.376 ------------------------------------------------------------------------------------ 00:09:45.376 0,0 451520/s 3527 MiB/s 0 0 00:09:45.376 ==================================================================================== 00:09:45.376 Total 451520/s 1763 MiB/s 0 0' 00:09:45.376 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.376 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.376 10:04:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:45.376 10:04:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:45.376 10:04:58 -- accel/accel.sh@12 -- # build_accel_config 00:09:45.376 10:04:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:45.376 10:04:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:45.376 10:04:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.376 10:04:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:45.376 10:04:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:45.376 10:04:58 -- accel/accel.sh@41 -- # local IFS=, 00:09:45.376 10:04:58 -- accel/accel.sh@42 -- # jq -r . 00:09:45.376 [2024-04-24 10:04:58.359851] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
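With -C 2 every transfer carries two 4096-byte vectors. In the table above the per-core row scales by the vector count (451520 * 4096 * 2 is about 3527 MiB/s) while the Total row does not (about 1763 MiB/s); the two figures differ by exactly that factor of 2, so they appear to aggregate the vector count differently, an observation from this output only:

  echo "$((451520 * 4096 * 2 / 1048576)) MiB/s"   # 3527, per-core row
  echo "$((451520 * 4096 / 1048576)) MiB/s"       # 1763, Total row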
00:09:45.376 [2024-04-24 10:04:58.359933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157104 ] 00:09:45.376 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.376 [2024-04-24 10:04:58.416203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.376 [2024-04-24 10:04:58.484006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.376 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.376 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=0x1 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=crc32c 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=0 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=software 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@23 -- # accel_module=software 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=32 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=32 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- 
accel/accel.sh@21 -- # val=1 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val=Yes 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:45.377 10:04:58 -- accel/accel.sh@21 -- # val= 00:09:45.377 10:04:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # IFS=: 00:09:45.377 10:04:58 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@21 -- # val= 00:09:46.753 10:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # IFS=: 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@21 -- # val= 00:09:46.753 10:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # IFS=: 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@21 -- # val= 00:09:46.753 10:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # IFS=: 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@21 -- # val= 00:09:46.753 10:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # IFS=: 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@21 -- # val= 00:09:46.753 10:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # IFS=: 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@21 -- # val= 00:09:46.753 10:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # IFS=: 00:09:46.753 10:04:59 -- accel/accel.sh@20 -- # read -r var val 00:09:46.753 10:04:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:46.753 10:04:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:46.753 10:04:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:46.753 00:09:46.753 real 0m2.703s 00:09:46.753 user 0m2.499s 00:09:46.753 sys 0m0.210s 00:09:46.753 10:04:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.753 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:09:46.753 ************************************ 00:09:46.753 END TEST accel_crc32c_C2 00:09:46.753 ************************************ 00:09:46.753 10:04:59 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:46.753 10:04:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:46.753 10:04:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.753 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:09:46.753 ************************************ 00:09:46.753 START TEST accel_copy 
00:09:46.753 ************************************ 00:09:46.753 10:04:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:09:46.753 10:04:59 -- accel/accel.sh@16 -- # local accel_opc 00:09:46.753 10:04:59 -- accel/accel.sh@17 -- # local accel_module 00:09:46.753 10:04:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:46.753 10:04:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:46.753 10:04:59 -- accel/accel.sh@12 -- # build_accel_config 00:09:46.753 10:04:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:46.753 10:04:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.753 10:04:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.753 10:04:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:46.753 10:04:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:46.753 10:04:59 -- accel/accel.sh@41 -- # local IFS=, 00:09:46.753 10:04:59 -- accel/accel.sh@42 -- # jq -r . 00:09:46.753 [2024-04-24 10:04:59.753312] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:46.753 [2024-04-24 10:04:59.753373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157357 ] 00:09:46.753 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.753 [2024-04-24 10:04:59.808022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.753 [2024-04-24 10:04:59.875126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.127 10:05:01 -- accel/accel.sh@18 -- # out=' 00:09:48.127 SPDK Configuration: 00:09:48.127 Core mask: 0x1 00:09:48.127 00:09:48.127 Accel Perf Configuration: 00:09:48.127 Workload Type: copy 00:09:48.127 Transfer size: 4096 bytes 00:09:48.127 Vector count 1 00:09:48.127 Module: software 00:09:48.127 Queue depth: 32 00:09:48.127 Allocate depth: 32 00:09:48.127 # threads/core: 1 00:09:48.127 Run time: 1 seconds 00:09:48.127 Verify: Yes 00:09:48.127 00:09:48.127 Running for 1 seconds... 00:09:48.127 00:09:48.127 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:48.127 ------------------------------------------------------------------------------------ 00:09:48.127 0,0 407968/s 1593 MiB/s 0 0 00:09:48.127 ==================================================================================== 00:09:48.127 Total 407968/s 1593 MiB/s 0 0' 00:09:48.127 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.127 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.127 10:05:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:48.127 10:05:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:48.127 10:05:01 -- accel/accel.sh@12 -- # build_accel_config 00:09:48.127 10:05:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:48.127 10:05:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:48.127 10:05:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:48.127 10:05:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:48.127 10:05:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:48.127 10:05:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:48.127 10:05:01 -- accel/accel.sh@42 -- # jq -r . 00:09:48.127 [2024-04-24 10:05:01.102135] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:48.128 [2024-04-24 10:05:01.102224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157591 ] 00:09:48.128 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.128 [2024-04-24 10:05:01.158032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.128 [2024-04-24 10:05:01.226588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=0x1 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=copy 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=software 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@23 -- # accel_module=software 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=32 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=32 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=1 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val=Yes 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:48.128 10:05:01 -- accel/accel.sh@21 -- # val= 00:09:48.128 10:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # IFS=: 00:09:48.128 10:05:01 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@21 -- # val= 00:09:49.502 10:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # IFS=: 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@21 -- # val= 00:09:49.502 10:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # IFS=: 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@21 -- # val= 00:09:49.502 10:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # IFS=: 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@21 -- # val= 00:09:49.502 10:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # IFS=: 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@21 -- # val= 00:09:49.502 10:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # IFS=: 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@21 -- # val= 00:09:49.502 10:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # IFS=: 00:09:49.502 10:05:02 -- accel/accel.sh@20 -- # read -r var val 00:09:49.502 10:05:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:49.502 10:05:02 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:49.502 10:05:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:49.502 00:09:49.502 real 0m2.704s 00:09:49.502 user 0m2.483s 00:09:49.502 sys 0m0.228s 00:09:49.502 10:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.502 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:49.502 ************************************ 00:09:49.502 END TEST accel_copy 00:09:49.502 ************************************ 00:09:49.502 10:05:02 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.502 10:05:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:49.502 10:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.502 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:09:49.502 ************************************ 00:09:49.502 START TEST accel_fill 00:09:49.502 ************************************ 00:09:49.502 10:05:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.502 10:05:02 -- accel/accel.sh@16 -- # local accel_opc 
00:09:49.502 10:05:02 -- accel/accel.sh@17 -- # local accel_module 00:09:49.502 10:05:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.502 10:05:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.502 10:05:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:49.502 10:05:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.502 10:05:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.502 10:05:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.502 10:05:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.502 10:05:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:49.502 10:05:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.502 10:05:02 -- accel/accel.sh@42 -- # jq -r . 00:09:49.502 [2024-04-24 10:05:02.497064] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:49.502 [2024-04-24 10:05:02.497141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157838 ] 00:09:49.502 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.502 [2024-04-24 10:05:02.553169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.502 [2024-04-24 10:05:02.620598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.877 10:05:03 -- accel/accel.sh@18 -- # out=' 00:09:50.877 SPDK Configuration: 00:09:50.877 Core mask: 0x1 00:09:50.877 00:09:50.877 Accel Perf Configuration: 00:09:50.877 Workload Type: fill 00:09:50.877 Fill pattern: 0x80 00:09:50.877 Transfer size: 4096 bytes 00:09:50.877 Vector count 1 00:09:50.877 Module: software 00:09:50.877 Queue depth: 64 00:09:50.877 Allocate depth: 64 00:09:50.877 # threads/core: 1 00:09:50.877 Run time: 1 seconds 00:09:50.877 Verify: Yes 00:09:50.877 00:09:50.877 Running for 1 seconds... 00:09:50.877 00:09:50.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:50.877 ------------------------------------------------------------------------------------ 00:09:50.877 0,0 652160/s 2547 MiB/s 0 0 00:09:50.877 ==================================================================================== 00:09:50.877 Total 652160/s 2547 MiB/s 0 0' 00:09:50.877 10:05:03 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:03 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:50.877 10:05:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:50.877 10:05:03 -- accel/accel.sh@12 -- # build_accel_config 00:09:50.877 10:05:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:50.877 10:05:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.877 10:05:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.877 10:05:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:50.877 10:05:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:50.877 10:05:03 -- accel/accel.sh@41 -- # local IFS=, 00:09:50.877 10:05:03 -- accel/accel.sh@42 -- # jq -r . 00:09:50.877 [2024-04-24 10:05:03.846175] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
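For reference, the fill run can be replayed against a local SPDK checkout with the flags visible in the xtrace: -t run time in seconds, -w workload, -f fill byte (128 = 0x80), -q queue depth, -a allocate depth, -y verify. A minimal sketch, assuming accel_perf falls back to its default software module when the -c /dev/fd/62 JSON config assembled by build_accel_config is left off, with the binary path relative to the spdk tree:

$ ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y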
00:09:50.877 [2024-04-24 10:05:03.846234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158081 ] 00:09:50.877 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.877 [2024-04-24 10:05:03.900273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.877 [2024-04-24 10:05:03.967953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=0x1 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=fill 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@24 -- # accel_opc=fill 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=0x80 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=software 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@23 -- # accel_module=software 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=64 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=64 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- 
accel/accel.sh@21 -- # val=1 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val=Yes 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:50.877 10:05:04 -- accel/accel.sh@21 -- # val= 00:09:50.877 10:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # IFS=: 00:09:50.877 10:05:04 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@21 -- # val= 00:09:52.251 10:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # IFS=: 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@21 -- # val= 00:09:52.251 10:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # IFS=: 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@21 -- # val= 00:09:52.251 10:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # IFS=: 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@21 -- # val= 00:09:52.251 10:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # IFS=: 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@21 -- # val= 00:09:52.251 10:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # IFS=: 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@21 -- # val= 00:09:52.251 10:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # IFS=: 00:09:52.251 10:05:05 -- accel/accel.sh@20 -- # read -r var val 00:09:52.251 10:05:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:52.251 10:05:05 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:09:52.251 10:05:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:52.251 00:09:52.251 real 0m2.702s 00:09:52.251 user 0m2.479s 00:09:52.251 sys 0m0.231s 00:09:52.251 10:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.251 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:52.251 ************************************ 00:09:52.251 END TEST accel_fill 00:09:52.251 ************************************ 00:09:52.251 10:05:05 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:52.251 10:05:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:52.251 10:05:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.251 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:09:52.251 ************************************ 00:09:52.251 START TEST 
accel_copy_crc32c 00:09:52.251 ************************************ 00:09:52.251 10:05:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:09:52.251 10:05:05 -- accel/accel.sh@16 -- # local accel_opc 00:09:52.251 10:05:05 -- accel/accel.sh@17 -- # local accel_module 00:09:52.251 10:05:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:52.251 10:05:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:52.251 10:05:05 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.251 10:05:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.251 10:05:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.252 10:05:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.252 10:05:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.252 10:05:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.252 10:05:05 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.252 10:05:05 -- accel/accel.sh@42 -- # jq -r . 00:09:52.252 [2024-04-24 10:05:05.234687] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:52.252 [2024-04-24 10:05:05.234752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158328 ] 00:09:52.252 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.252 [2024-04-24 10:05:05.290029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.252 [2024-04-24 10:05:05.359009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.626 10:05:06 -- accel/accel.sh@18 -- # out=' 00:09:53.626 SPDK Configuration: 00:09:53.626 Core mask: 0x1 00:09:53.626 00:09:53.626 Accel Perf Configuration: 00:09:53.626 Workload Type: copy_crc32c 00:09:53.626 CRC-32C seed: 0 00:09:53.626 Vector size: 4096 bytes 00:09:53.626 Transfer size: 4096 bytes 00:09:53.626 Vector count 1 00:09:53.626 Module: software 00:09:53.626 Queue depth: 32 00:09:53.626 Allocate depth: 32 00:09:53.626 # threads/core: 1 00:09:53.626 Run time: 1 seconds 00:09:53.626 Verify: Yes 00:09:53.626 00:09:53.626 Running for 1 seconds... 00:09:53.626 00:09:53.626 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:53.626 ------------------------------------------------------------------------------------ 00:09:53.626 0,0 324736/s 1268 MiB/s 0 0 00:09:53.626 ==================================================================================== 00:09:53.626 Total 324736/s 1268 MiB/s 0 0' 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:53.626 10:05:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:53.626 10:05:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:53.626 10:05:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:53.626 10:05:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.626 10:05:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.626 10:05:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:53.626 10:05:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:53.626 10:05:06 -- accel/accel.sh@41 -- # local IFS=, 00:09:53.626 10:05:06 -- accel/accel.sh@42 -- # jq -r . 
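The wall of IFS=:/read lines above comes from accel.sh running accel_perf twice per case: accel.sh@18 captures the formatted report into $out, then accel.sh@15 runs it again with stdout parsed line by line, each 'Key: value' pair split at the colon. A stripped-down sketch of that parsing idiom, a simplification for illustration rather than accel.sh's actual code:

# process substitution keeps the parsed variables in the current shell
while IFS=: read -r var val; do
  case "$var" in
    *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;   # e.g. copy_crc32c
    *'Module'*) accel_module=${val//[[:space:]]/} ;;       # e.g. software
  esac
done < <(./build/examples/accel_perf -t 1 -w copy_crc32c -y)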
00:09:53.626 [2024-04-24 10:05:06.583734] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:53.626 [2024-04-24 10:05:06.583811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158564 ] 00:09:53.626 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.626 [2024-04-24 10:05:06.637928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.626 [2024-04-24 10:05:06.705582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val=0x1 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val=0 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val=software 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.626 10:05:06 -- accel/accel.sh@23 -- # accel_module=software 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.626 10:05:06 -- accel/accel.sh@21 -- # val=32 00:09:53.626 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 
00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.626 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.627 10:05:06 -- accel/accel.sh@21 -- # val=32 00:09:53.627 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.627 10:05:06 -- accel/accel.sh@21 -- # val=1 00:09:53.627 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.627 10:05:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:53.627 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.627 10:05:06 -- accel/accel.sh@21 -- # val=Yes 00:09:53.627 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.627 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.627 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:53.627 10:05:06 -- accel/accel.sh@21 -- # val= 00:09:53.627 10:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # IFS=: 00:09:53.627 10:05:06 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@21 -- # val= 00:09:54.630 10:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # IFS=: 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@21 -- # val= 00:09:54.630 10:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # IFS=: 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@21 -- # val= 00:09:54.630 10:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # IFS=: 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@21 -- # val= 00:09:54.630 10:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # IFS=: 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@21 -- # val= 00:09:54.630 10:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # IFS=: 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@21 -- # val= 00:09:54.630 10:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # IFS=: 00:09:54.630 10:05:07 -- accel/accel.sh@20 -- # read -r var val 00:09:54.630 10:05:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:54.630 10:05:07 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:54.630 10:05:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.630 00:09:54.630 real 0m2.699s 00:09:54.630 user 0m2.487s 00:09:54.630 sys 0m0.220s 00:09:54.630 10:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.630 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.630 ************************************ 00:09:54.630 END TEST accel_copy_crc32c 00:09:54.630 ************************************ 00:09:54.889 
10:05:07 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:54.889 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:54.889 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.889 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.889 ************************************ 00:09:54.889 START TEST accel_copy_crc32c_C2 00:09:54.889 ************************************ 00:09:54.889 10:05:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:54.889 10:05:07 -- accel/accel.sh@16 -- # local accel_opc 00:09:54.889 10:05:07 -- accel/accel.sh@17 -- # local accel_module 00:09:54.889 10:05:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:54.889 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:54.889 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.889 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.889 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.889 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.889 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.889 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.889 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.889 10:05:07 -- accel/accel.sh@42 -- # jq -r . 00:09:54.889 [2024-04-24 10:05:07.972289] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:54.889 [2024-04-24 10:05:07.972354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158823 ] 00:09:54.889 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.889 [2024-04-24 10:05:08.029158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.889 [2024-04-24 10:05:08.098887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.263 10:05:09 -- accel/accel.sh@18 -- # out=' 00:09:56.263 SPDK Configuration: 00:09:56.263 Core mask: 0x1 00:09:56.263 00:09:56.263 Accel Perf Configuration: 00:09:56.263 Workload Type: copy_crc32c 00:09:56.263 CRC-32C seed: 0 00:09:56.263 Vector size: 4096 bytes 00:09:56.263 Transfer size: 8192 bytes 00:09:56.263 Vector count 2 00:09:56.263 Module: software 00:09:56.263 Queue depth: 32 00:09:56.263 Allocate depth: 32 00:09:56.263 # threads/core: 1 00:09:56.263 Run time: 1 seconds 00:09:56.263 Verify: Yes 00:09:56.263 00:09:56.263 Running for 1 seconds... 
00:09:56.263 00:09:56.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:56.263 ------------------------------------------------------------------------------------ 00:09:56.263 0,0 236224/s 1845 MiB/s 0 0 00:09:56.263 ==================================================================================== 00:09:56.263 Total 236224/s 1845 MiB/s 0 0' 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:56.263 10:05:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:56.263 10:05:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.263 10:05:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:56.263 10:05:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.263 10:05:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.263 10:05:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:56.263 10:05:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:56.263 10:05:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:56.263 10:05:09 -- accel/accel.sh@42 -- # jq -r . 00:09:56.263 [2024-04-24 10:05:09.323910] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:56.263 [2024-04-24 10:05:09.323969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159058 ] 00:09:56.263 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.263 [2024-04-24 10:05:09.377634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.263 [2024-04-24 10:05:09.445779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=0x1 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=0 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=:
00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val='8192 bytes' 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=software 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@23 -- # accel_module=software 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=32 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=32 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=1 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val=Yes 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:56.263 10:05:09 -- accel/accel.sh@21 -- # val= 00:09:56.263 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:09:56.263 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:57.637 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:57.637 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:57.637 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:57.637 10:05:10 -- 
accel/accel.sh@22 -- # case "$var" in 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:57.637 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@21 -- # val= 00:09:57.637 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:09:57.637 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:09:57.637 10:05:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:57.637 10:05:10 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:57.637 10:05:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:57.637 00:09:57.637 real 0m2.706s 00:09:57.637 user 0m2.492s 00:09:57.637 sys 0m0.223s 00:09:57.637 10:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.637 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:09:57.637 ************************************ 00:09:57.637 END TEST accel_copy_crc32c_C2 00:09:57.637 ************************************ 00:09:57.637 10:05:10 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:57.637 10:05:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:57.637 10:05:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.637 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:09:57.637 ************************************ 00:09:57.637 START TEST accel_dualcast 00:09:57.637 ************************************ 00:09:57.637 10:05:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:09:57.637 10:05:10 -- accel/accel.sh@16 -- # local accel_opc 00:09:57.637 10:05:10 -- accel/accel.sh@17 -- # local accel_module 00:09:57.637 10:05:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:09:57.637 10:05:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:57.637 10:05:10 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.637 10:05:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.637 10:05:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.637 10:05:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.637 10:05:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.637 10:05:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.638 10:05:10 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.638 10:05:10 -- accel/accel.sh@42 -- # jq -r . 00:09:57.638 [2024-04-24 10:05:10.707395] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
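One note on the accel_copy_crc32c_C2 case that just closed: with -C 2 each operation chains two 4096-byte vectors, so the 8192-byte transfer size from the configuration block is what the Bandwidth column is computed from, and the per-core row and the Total row agree on 1845 MiB/s. The same illustrative awk check as earlier:

$ awk 'BEGIN { print int(236224 * 8192 / 1048576), "MiB/s" }'
1845 MiB/s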
00:09:57.638 [2024-04-24 10:05:10.707458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159307 ] 00:09:57.638 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.638 [2024-04-24 10:05:10.760237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.638 [2024-04-24 10:05:10.831122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.017 10:05:12 -- accel/accel.sh@18 -- # out=' 00:09:59.017 SPDK Configuration: 00:09:59.017 Core mask: 0x1 00:09:59.017 00:09:59.017 Accel Perf Configuration: 00:09:59.017 Workload Type: dualcast 00:09:59.017 Transfer size: 4096 bytes 00:09:59.017 Vector count 1 00:09:59.017 Module: software 00:09:59.017 Queue depth: 32 00:09:59.017 Allocate depth: 32 00:09:59.017 # threads/core: 1 00:09:59.017 Run time: 1 seconds 00:09:59.017 Verify: Yes 00:09:59.017 00:09:59.017 Running for 1 seconds... 00:09:59.017 00:09:59.017 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:59.017 ------------------------------------------------------------------------------------ 00:09:59.017 0,0 499840/s 1952 MiB/s 0 0 00:09:59.017 ==================================================================================== 00:09:59.017 Total 499840/s 1952 MiB/s 0 0' 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:59.017 10:05:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:59.017 10:05:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.017 10:05:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.017 10:05:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.017 10:05:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.017 10:05:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.017 10:05:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.017 10:05:12 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.017 10:05:12 -- accel/accel.sh@42 -- # jq -r . 00:09:59.017 [2024-04-24 10:05:12.053842] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
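The dualcast figures above (499840/s, 1952 MiB/s) count each 4096-byte transfer once, even though dualcast, as I understand the opcode, writes the same payload to two destination buffers; the bytes actually written per second are therefore about double the reported bandwidth. An illustrative calculation, not harness output:

$ awk 'BEGIN { print int(499840 * 4096 * 2 / 1048576), "MiB/s written" }'
3905 MiB/s written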
00:09:59.017 [2024-04-24 10:05:12.053900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159545 ] 00:09:59.017 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.017 [2024-04-24 10:05:12.107435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.017 [2024-04-24 10:05:12.175385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=0x1 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=dualcast 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=software 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@23 -- # accel_module=software 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=32 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=32 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=1 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val=Yes 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:09:59.017 10:05:12 -- accel/accel.sh@21 -- # val= 00:09:59.017 10:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # IFS=: 00:09:59.017 10:05:12 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:00.396 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:00.396 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:00.396 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:00.396 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:00.396 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@21 -- # val= 00:10:00.396 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:10:00.396 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:10:00.396 10:05:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:00.396 10:05:13 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:00.396 10:05:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:00.396 00:10:00.396 real 0m2.688s 00:10:00.396 user 0m2.482s 00:10:00.396 sys 0m0.214s 00:10:00.396 10:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.396 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:00.396 ************************************ 00:10:00.396 END TEST accel_dualcast 00:10:00.396 ************************************ 00:10:00.396 10:05:13 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:00.396 10:05:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:00.396 10:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.396 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:10:00.396 ************************************ 00:10:00.396 START TEST accel_compare 00:10:00.396 ************************************ 00:10:00.396 10:05:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:00.397 10:05:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:00.397 10:05:13 -- 
accel/accel.sh@17 -- # local accel_module 00:10:00.397 10:05:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:00.397 10:05:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:00.397 10:05:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:00.397 10:05:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.397 10:05:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.397 10:05:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.397 10:05:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.397 10:05:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.397 10:05:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.397 10:05:13 -- accel/accel.sh@42 -- # jq -r . 00:10:00.397 [2024-04-24 10:05:13.445058] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:00.397 [2024-04-24 10:05:13.445300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159794 ] 00:10:00.397 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.397 [2024-04-24 10:05:13.501079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.397 [2024-04-24 10:05:13.570626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.769 10:05:14 -- accel/accel.sh@18 -- # out=' 00:10:01.769 SPDK Configuration: 00:10:01.769 Core mask: 0x1 00:10:01.769 00:10:01.769 Accel Perf Configuration: 00:10:01.769 Workload Type: compare 00:10:01.769 Transfer size: 4096 bytes 00:10:01.769 Vector count 1 00:10:01.769 Module: software 00:10:01.769 Queue depth: 32 00:10:01.769 Allocate depth: 32 00:10:01.769 # threads/core: 1 00:10:01.769 Run time: 1 seconds 00:10:01.769 Verify: Yes 00:10:01.769 00:10:01.769 Running for 1 seconds... 00:10:01.769 00:10:01.769 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:01.769 ------------------------------------------------------------------------------------ 00:10:01.769 0,0 607296/s 2372 MiB/s 0 0 00:10:01.769 ==================================================================================== 00:10:01.769 Total 607296/s 2372 MiB/s 0 0' 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:01.770 10:05:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:01.770 10:05:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.770 10:05:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.770 10:05:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.770 10:05:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.770 10:05:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.770 10:05:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.770 10:05:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.770 10:05:14 -- accel/accel.sh@42 -- # jq -r . 00:10:01.770 [2024-04-24 10:05:14.794509] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
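The START TEST/END TEST banners and the real/user/sys triplets after every case come from the run_test helper in autotest_common.sh, entered through what appears to be the argument-count guard logged as '[' 7 -le 1 ']'. A hypothetical reduction of that wrapper, showing only the shape that would produce the banners and the time output; the real helper also toggles xtrace:

run_test() {
  local name=$1; shift    # first argument names the test case
  echo "START TEST $name"
  time "$@"               # run the test body; emits the real/user/sys lines
  echo "END TEST $name"
}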
00:10:01.770 [2024-04-24 10:05:14.794577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160029 ] 00:10:01.770 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.770 [2024-04-24 10:05:14.850605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.770 [2024-04-24 10:05:14.920660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=0x1 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=compare 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=software 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@23 -- # accel_module=software 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=32 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=32 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=1 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val=Yes 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:01.770 10:05:14 -- accel/accel.sh@21 -- # val= 00:10:01.770 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:10:01.770 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.146 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.146 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.146 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.146 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.146 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@21 -- # val= 00:10:03.146 10:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # IFS=: 00:10:03.146 10:05:16 -- accel/accel.sh@20 -- # read -r var val 00:10:03.146 10:05:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:03.146 10:05:16 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:03.146 10:05:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.146 00:10:03.146 real 0m2.707s 00:10:03.146 user 0m2.484s 00:10:03.146 sys 0m0.231s 00:10:03.146 10:05:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.146 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:10:03.146 ************************************ 00:10:03.146 END TEST accel_compare 00:10:03.146 ************************************ 00:10:03.146 10:05:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:03.146 10:05:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:03.146 10:05:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:03.146 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:10:03.146 ************************************ 00:10:03.146 START TEST accel_xor 00:10:03.146 ************************************ 00:10:03.146 10:05:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:03.146 10:05:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:03.146 10:05:16 -- accel/accel.sh@17 
-- # local accel_module 00:10:03.146 10:05:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:03.146 10:05:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:03.146 10:05:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.146 10:05:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.146 10:05:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.146 10:05:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.146 10:05:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.146 10:05:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.146 10:05:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.146 10:05:16 -- accel/accel.sh@42 -- # jq -r . 00:10:03.146 [2024-04-24 10:05:16.189644] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:03.146 [2024-04-24 10:05:16.189714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160284 ] 00:10:03.146 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.146 [2024-04-24 10:05:16.246285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.146 [2024-04-24 10:05:16.315543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.520 10:05:17 -- accel/accel.sh@18 -- # out=' 00:10:04.520 SPDK Configuration: 00:10:04.520 Core mask: 0x1 00:10:04.520 00:10:04.520 Accel Perf Configuration: 00:10:04.520 Workload Type: xor 00:10:04.520 Source buffers: 2 00:10:04.520 Transfer size: 4096 bytes 00:10:04.520 Vector count 1 00:10:04.520 Module: software 00:10:04.520 Queue depth: 32 00:10:04.520 Allocate depth: 32 00:10:04.520 # threads/core: 1 00:10:04.520 Run time: 1 seconds 00:10:04.520 Verify: Yes 00:10:04.521 00:10:04.521 Running for 1 seconds... 00:10:04.521 00:10:04.521 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:04.521 ------------------------------------------------------------------------------------ 00:10:04.521 0,0 474752/s 1854 MiB/s 0 0 00:10:04.521 ==================================================================================== 00:10:04.521 Total 474752/s 1854 MiB/s 0 0' 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:04.521 10:05:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:04.521 10:05:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.521 10:05:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.521 10:05:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.521 10:05:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.521 10:05:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.521 10:05:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.521 10:05:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.521 10:05:17 -- accel/accel.sh@42 -- # jq -r . 00:10:04.521 [2024-04-24 10:05:17.539130] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
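The xor run above reports "Source buffers: 2", the accel_perf behavior when -x is not given; the next test in this log passes -x 3 to exercise a three-source xor. A sketch of both invocations, under the same $SPDK_DIR assumption as above:

  # Two-source xor (no -x), then the three-source variant run later in this log.
  $SPDK_DIR/build/examples/accel_perf -t 1 -w xor -y
  $SPDK_DIR/build/examples/accel_perf -t 1 -w xor -y -x 3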
00:10:04.521 [2024-04-24 10:05:17.539188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160516 ] 00:10:04.521 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.521 [2024-04-24 10:05:17.593024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.521 [2024-04-24 10:05:17.660882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=0x1 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=xor 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=2 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=software 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@23 -- # accel_module=software 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=32 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=32 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- 
accel/accel.sh@21 -- # val=1 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val=Yes 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:04.521 10:05:17 -- accel/accel.sh@21 -- # val= 00:10:04.521 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:10:04.521 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.898 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.898 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.898 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.898 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.898 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@21 -- # val= 00:10:05.898 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:10:05.898 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:10:05.898 10:05:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:05.898 10:05:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:05.898 10:05:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:05.898 00:10:05.898 real 0m2.703s 00:10:05.898 user 0m2.493s 00:10:05.898 sys 0m0.220s 00:10:05.898 10:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.898 10:05:18 -- common/autotest_common.sh@10 -- # set +x 00:10:05.898 ************************************ 00:10:05.898 END TEST accel_xor 00:10:05.898 ************************************ 00:10:05.898 10:05:18 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:05.898 10:05:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:05.898 10:05:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:05.898 10:05:18 -- common/autotest_common.sh@10 -- # set +x 00:10:05.898 ************************************ 00:10:05.898 START TEST accel_xor 
00:10:05.898 ************************************ 00:10:05.898 10:05:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:05.898 10:05:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:05.898 10:05:18 -- accel/accel.sh@17 -- # local accel_module 00:10:05.898 10:05:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:05.898 10:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:05.898 10:05:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:05.898 10:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.898 10:05:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.898 10:05:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.898 10:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.898 10:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.899 10:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.899 10:05:18 -- accel/accel.sh@42 -- # jq -r . 00:10:05.899 [2024-04-24 10:05:18.930354] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:05.899 [2024-04-24 10:05:18.930434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160771 ] 00:10:05.899 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.899 [2024-04-24 10:05:18.984728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.899 [2024-04-24 10:05:19.055483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.277 10:05:20 -- accel/accel.sh@18 -- # out=' 00:10:07.277 SPDK Configuration: 00:10:07.277 Core mask: 0x1 00:10:07.277 00:10:07.277 Accel Perf Configuration: 00:10:07.277 Workload Type: xor 00:10:07.277 Source buffers: 3 00:10:07.277 Transfer size: 4096 bytes 00:10:07.277 Vector count 1 00:10:07.277 Module: software 00:10:07.277 Queue depth: 32 00:10:07.277 Allocate depth: 32 00:10:07.277 # threads/core: 1 00:10:07.277 Run time: 1 seconds 00:10:07.277 Verify: Yes 00:10:07.277 00:10:07.277 Running for 1 seconds... 00:10:07.277 00:10:07.277 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:07.277 ------------------------------------------------------------------------------------ 00:10:07.277 0,0 456032/s 1781 MiB/s 0 0 00:10:07.277 ==================================================================================== 00:10:07.277 Total 456032/s 1781 MiB/s 0 0' 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:07.277 10:05:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:07.277 10:05:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.277 10:05:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:07.277 10:05:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.277 10:05:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.277 10:05:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:07.277 10:05:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:07.277 10:05:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:07.277 10:05:20 -- accel/accel.sh@42 -- # jq -r . 00:10:07.277 [2024-04-24 10:05:20.282145] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
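The bandwidth column in these tables follows directly from the transfer rate and the 4096-byte transfer size; a quick shell check of the arithmetic for the three-source xor table above, and for the earlier compare table:

  # transfers/s x 4096 bytes per transfer, reported in MiB/s (integer math).
  echo $(( 456032 * 4096 / 1024 / 1024 ))   # prints 1781 (xor, 3 sources)
  echo $(( 607296 * 4096 / 1024 / 1024 ))   # prints 2372 (compare)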
00:10:07.277 [2024-04-24 10:05:20.282225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161010 ] 00:10:07.277 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.277 [2024-04-24 10:05:20.337979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.277 [2024-04-24 10:05:20.407226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=0x1 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=xor 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=3 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=software 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@23 -- # accel_module=software 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=32 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=32 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- 
accel/accel.sh@21 -- # val=1 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val=Yes 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:07.277 10:05:20 -- accel/accel.sh@21 -- # val= 00:10:07.277 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:10:07.277 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.654 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.654 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.654 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.654 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.654 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@21 -- # val= 00:10:08.654 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:10:08.654 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:10:08.654 10:05:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:08.654 10:05:21 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:08.654 10:05:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.654 00:10:08.654 real 0m2.707s 00:10:08.654 user 0m2.476s 00:10:08.654 sys 0m0.240s 00:10:08.654 10:05:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.654 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:10:08.654 ************************************ 00:10:08.654 END TEST accel_xor 00:10:08.654 ************************************ 00:10:08.654 10:05:21 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:08.654 10:05:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:08.654 10:05:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:08.654 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:10:08.654 ************************************ 00:10:08.654 START TEST 
accel_dif_verify 00:10:08.654 ************************************ 00:10:08.654 10:05:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:08.654 10:05:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:08.654 10:05:21 -- accel/accel.sh@17 -- # local accel_module 00:10:08.654 10:05:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:08.654 10:05:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:08.654 10:05:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.654 10:05:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.654 10:05:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.654 10:05:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.654 10:05:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.654 10:05:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.654 10:05:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.654 10:05:21 -- accel/accel.sh@42 -- # jq -r . 00:10:08.654 [2024-04-24 10:05:21.674497] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:08.654 [2024-04-24 10:05:21.674568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161258 ] 00:10:08.654 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.654 [2024-04-24 10:05:21.730776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.654 [2024-04-24 10:05:21.799741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.032 10:05:23 -- accel/accel.sh@18 -- # out=' 00:10:10.032 SPDK Configuration: 00:10:10.032 Core mask: 0x1 00:10:10.032 00:10:10.032 Accel Perf Configuration: 00:10:10.032 Workload Type: dif_verify 00:10:10.032 Vector size: 4096 bytes 00:10:10.032 Transfer size: 4096 bytes 00:10:10.032 Block size: 512 bytes 00:10:10.032 Metadata size: 8 bytes 00:10:10.032 Vector count 1 00:10:10.032 Module: software 00:10:10.032 Queue depth: 32 00:10:10.032 Allocate depth: 32 00:10:10.032 # threads/core: 1 00:10:10.032 Run time: 1 seconds 00:10:10.032 Verify: No 00:10:10.032 00:10:10.032 Running for 1 seconds... 00:10:10.032 00:10:10.032 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:10.032 ------------------------------------------------------------------------------------ 00:10:10.032 0,0 128288/s 508 MiB/s 0 0 00:10:10.032 ==================================================================================== 00:10:10.032 Total 128288/s 501 MiB/s 0 0' 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:10.032 10:05:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:10.032 10:05:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.032 10:05:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:10.032 10:05:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.032 10:05:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.032 10:05:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:10.032 10:05:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:10.032 10:05:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:10.032 10:05:23 -- accel/accel.sh@42 -- # jq -r . 
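The dif_verify configuration above pairs a 4096-byte transfer with a 512-byte block size and 8 bytes of metadata, i.e. eight protection-information tuples per transfer. The per-core row (508 MiB/s) and the Total line (501 MiB/s) are consistent with the row counting payload plus the 64 bytes of PI per transfer while the Total counts payload only; this is an inference from the numbers, not something the log states:

  # Per-core row: payload + 8 x 8-byte PI tuples; Total line: payload only.
  echo $(( 128288 * (4096 + 8 * 8) / 1024 / 1024 ))  # prints 508
  echo $(( 128288 * 4096 / 1024 / 1024 ))            # prints 501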
00:10:10.032 [2024-04-24 10:05:23.026483] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:10.032 [2024-04-24 10:05:23.026547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161498 ] 00:10:10.032 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.032 [2024-04-24 10:05:23.081281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.032 [2024-04-24 10:05:23.149866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val=0x1 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val=dif_verify 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.032 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.032 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.032 10:05:23 -- accel/accel.sh@21 -- # val=software 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@23 -- # 
accel_module=software 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val=32 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val=32 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val=1 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val=No 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:10.033 10:05:23 -- accel/accel.sh@21 -- # val= 00:10:10.033 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:10:10.033 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@21 -- # val= 00:10:11.410 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@21 -- # val= 00:10:11.410 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@21 -- # val= 00:10:11.410 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@21 -- # val= 00:10:11.410 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@21 -- # val= 00:10:11.410 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@21 -- # val= 00:10:11.410 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:10:11.410 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:10:11.410 10:05:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:11.410 10:05:24 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:11.410 10:05:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:11.410 00:10:11.410 real 0m2.706s 00:10:11.410 user 0m2.497s 00:10:11.410 sys 0m0.218s 00:10:11.410 10:05:24 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.410 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:10:11.410 ************************************ 00:10:11.410 END TEST accel_dif_verify 00:10:11.410 ************************************ 00:10:11.410 10:05:24 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:11.410 10:05:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:11.410 10:05:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.410 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:10:11.410 ************************************ 00:10:11.410 START TEST accel_dif_generate 00:10:11.410 ************************************ 00:10:11.410 10:05:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:10:11.410 10:05:24 -- accel/accel.sh@16 -- # local accel_opc 00:10:11.410 10:05:24 -- accel/accel.sh@17 -- # local accel_module 00:10:11.410 10:05:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:11.410 10:05:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:11.410 10:05:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.410 10:05:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.410 10:05:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.410 10:05:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.410 10:05:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.410 10:05:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.410 10:05:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.410 10:05:24 -- accel/accel.sh@42 -- # jq -r . 00:10:11.410 [2024-04-24 10:05:24.418793] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:11.410 [2024-04-24 10:05:24.418853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161750 ] 00:10:11.410 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.410 [2024-04-24 10:05:24.472020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.410 [2024-04-24 10:05:24.542103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.786 10:05:25 -- accel/accel.sh@18 -- # out=' 00:10:12.786 SPDK Configuration: 00:10:12.786 Core mask: 0x1 00:10:12.786 00:10:12.786 Accel Perf Configuration: 00:10:12.786 Workload Type: dif_generate 00:10:12.786 Vector size: 4096 bytes 00:10:12.786 Transfer size: 4096 bytes 00:10:12.786 Block size: 512 bytes 00:10:12.786 Metadata size: 8 bytes 00:10:12.786 Vector count 1 00:10:12.786 Module: software 00:10:12.786 Queue depth: 32 00:10:12.786 Allocate depth: 32 00:10:12.786 # threads/core: 1 00:10:12.786 Run time: 1 seconds 00:10:12.786 Verify: No 00:10:12.786 00:10:12.786 Running for 1 seconds... 
00:10:12.786 00:10:12.786 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:12.786 ------------------------------------------------------------------------------------ 00:10:12.786 0,0 157984/s 626 MiB/s 0 0 00:10:12.786 ==================================================================================== 00:10:12.786 Total 157984/s 617 MiB/s 0 0' 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:12.786 10:05:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:12.786 10:05:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:12.786 10:05:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:12.786 10:05:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.786 10:05:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.786 10:05:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:12.786 10:05:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:12.786 10:05:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:12.786 10:05:25 -- accel/accel.sh@42 -- # jq -r . 00:10:12.786 [2024-04-24 10:05:25.767216] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:12.786 [2024-04-24 10:05:25.767292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161984 ] 00:10:12.786 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.786 [2024-04-24 10:05:25.822365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.786 [2024-04-24 10:05:25.890026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val=0x1 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val=dif_generate 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.786 10:05:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.786 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.786 10:05:25 -- accel/accel.sh@20 -- # IFS=: 
00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val=software 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@23 -- # accel_module=software 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val=32 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val=32 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val=1 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val=No 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:12.787 10:05:25 -- accel/accel.sh@21 -- # val= 00:10:12.787 10:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:10:12.787 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.161 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.161 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.161 10:05:27 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.161 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.161 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@21 -- # val= 00:10:14.161 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:10:14.161 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:10:14.161 10:05:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:14.161 10:05:27 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:14.161 10:05:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.161 00:10:14.161 real 0m2.701s 00:10:14.161 user 0m2.484s 00:10:14.161 sys 0m0.227s 00:10:14.161 10:05:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.161 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 ************************************ 00:10:14.161 END TEST accel_dif_generate 00:10:14.161 ************************************ 00:10:14.161 10:05:27 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:14.161 10:05:27 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:14.161 10:05:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.161 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:10:14.161 ************************************ 00:10:14.161 START TEST accel_dif_generate_copy 00:10:14.161 ************************************ 00:10:14.161 10:05:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:14.161 10:05:27 -- accel/accel.sh@16 -- # local accel_opc 00:10:14.161 10:05:27 -- accel/accel.sh@17 -- # local accel_module 00:10:14.161 10:05:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:14.161 10:05:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:14.161 10:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.161 10:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.161 10:05:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.161 10:05:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.161 10:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.161 10:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.161 10:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.161 10:05:27 -- accel/accel.sh@42 -- # jq -r . 00:10:14.161 [2024-04-24 10:05:27.161837] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:14.161 [2024-04-24 10:05:27.161916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162237 ] 00:10:14.161 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.161 [2024-04-24 10:05:27.217616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.161 [2024-04-24 10:05:27.285192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.538 10:05:28 -- accel/accel.sh@18 -- # out=' 00:10:15.538 SPDK Configuration: 00:10:15.538 Core mask: 0x1 00:10:15.538 00:10:15.538 Accel Perf Configuration: 00:10:15.538 Workload Type: dif_generate_copy 00:10:15.538 Vector size: 4096 bytes 00:10:15.538 Transfer size: 4096 bytes 00:10:15.538 Vector count 1 00:10:15.538 Module: software 00:10:15.538 Queue depth: 32 00:10:15.538 Allocate depth: 32 00:10:15.538 # threads/core: 1 00:10:15.538 Run time: 1 seconds 00:10:15.538 Verify: No 00:10:15.538 00:10:15.538 Running for 1 seconds... 00:10:15.538 00:10:15.538 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:15.538 ------------------------------------------------------------------------------------ 00:10:15.538 0,0 121824/s 483 MiB/s 0 0 00:10:15.538 ==================================================================================== 00:10:15.538 Total 121824/s 475 MiB/s 0 0' 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:15.538 10:05:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:15.538 10:05:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:15.538 10:05:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.538 10:05:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.538 10:05:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.538 10:05:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.538 10:05:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.538 10:05:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.538 10:05:28 -- accel/accel.sh@42 -- # jq -r . 00:10:15.538 [2024-04-24 10:05:28.510946] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
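dif_verify and dif_generate above print the same 4096/512/8 geometry, and this dif_generate_copy run is launched identically, differing only in the workload name passed to -w. A compact sketch reproducing the three runs in sequence, again assuming a built tree at $SPDK_DIR:

  # Sweep the three DIF workloads exercised by this section of the log.
  for w in dif_verify dif_generate dif_generate_copy; do
      $SPDK_DIR/build/examples/accel_perf -t 1 -w $w
  done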
00:10:15.538 [2024-04-24 10:05:28.511015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162471 ] 00:10:15.538 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.538 [2024-04-24 10:05:28.568153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.538 [2024-04-24 10:05:28.637271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val=0x1 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.538 10:05:28 -- accel/accel.sh@21 -- # val=software 00:10:15.538 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.538 10:05:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:15.538 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val=32 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val=32 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var 
val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val=1 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val=No 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:15.539 10:05:28 -- accel/accel.sh@21 -- # val= 00:10:15.539 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:10:15.539 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.914 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.914 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.914 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.914 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.914 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@21 -- # val= 00:10:16.914 10:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # IFS=: 00:10:16.914 10:05:29 -- accel/accel.sh@20 -- # read -r var val 00:10:16.914 10:05:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:16.914 10:05:29 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:16.914 10:05:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.914 00:10:16.914 real 0m2.708s 00:10:16.914 user 0m2.488s 00:10:16.914 sys 0m0.228s 00:10:16.914 10:05:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.914 10:05:29 -- common/autotest_common.sh@10 -- # set +x 00:10:16.914 ************************************ 00:10:16.914 END TEST accel_dif_generate_copy 00:10:16.914 ************************************ 00:10:16.914 10:05:29 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:16.914 10:05:29 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.914 10:05:29 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:16.914 10:05:29 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.914 10:05:29 -- common/autotest_common.sh@10 -- # set +x 00:10:16.914 ************************************ 00:10:16.914 START TEST accel_comp 00:10:16.914 ************************************ 00:10:16.914 10:05:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.914 10:05:29 -- accel/accel.sh@16 -- # local accel_opc 00:10:16.914 10:05:29 -- accel/accel.sh@17 -- # local accel_module 00:10:16.914 10:05:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.914 10:05:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.914 10:05:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.914 10:05:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.914 10:05:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.914 10:05:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.914 10:05:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.914 10:05:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.914 10:05:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.914 10:05:29 -- accel/accel.sh@42 -- # jq -r . 00:10:16.914 [2024-04-24 10:05:29.908618] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:16.914 [2024-04-24 10:05:29.908694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162721 ] 00:10:16.914 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.914 [2024-04-24 10:05:29.964142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.914 [2024-04-24 10:05:30.038127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.289 10:05:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:18.289 00:10:18.290 SPDK Configuration: 00:10:18.290 Core mask: 0x1 00:10:18.290 00:10:18.290 Accel Perf Configuration: 00:10:18.290 Workload Type: compress 00:10:18.290 Transfer size: 4096 bytes 00:10:18.290 Vector count 1 00:10:18.290 Module: software 00:10:18.290 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:18.290 Queue depth: 32 00:10:18.290 Allocate depth: 32 00:10:18.290 # threads/core: 1 00:10:18.290 Run time: 1 seconds 00:10:18.290 Verify: No 00:10:18.290 00:10:18.290 Running for 1 seconds... 
00:10:18.290 00:10:18.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.290 ------------------------------------------------------------------------------------ 00:10:18.290 0,0 60928/s 238 MiB/s 0 0 00:10:18.290 ==================================================================================== 00:10:18.290 Total 60928/s 238 MiB/s 0 0' 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:18.290 10:05:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:18.290 10:05:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.290 10:05:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.290 10:05:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.290 10:05:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.290 10:05:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.290 10:05:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.290 10:05:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.290 10:05:31 -- accel/accel.sh@42 -- # jq -r . 00:10:18.290 [2024-04-24 10:05:31.265443] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:18.290 [2024-04-24 10:05:31.265500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162961 ] 00:10:18.290 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.290 [2024-04-24 10:05:31.319105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.290 [2024-04-24 10:05:31.386911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=0x1 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=compress 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 
10:05:31 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=software 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@23 -- # accel_module=software 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=32 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=32 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=1 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val=No 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:18.290 10:05:31 -- accel/accel.sh@21 -- # val= 00:10:18.290 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:10:18.290 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:19.673 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:19.673 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:19.673 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # 
IFS=: 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:19.673 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:19.673 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@21 -- # val= 00:10:19.673 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:10:19.673 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:10:19.673 10:05:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:19.673 10:05:32 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:19.673 10:05:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.673 00:10:19.673 real 0m2.713s 00:10:19.673 user 0m2.497s 00:10:19.673 sys 0m0.226s 00:10:19.673 10:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.673 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:10:19.673 ************************************ 00:10:19.673 END TEST accel_comp 00:10:19.673 ************************************ 00:10:19.673 10:05:32 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:19.673 10:05:32 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:19.673 10:05:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.673 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:10:19.673 ************************************ 00:10:19.673 START TEST accel_decomp 00:10:19.673 ************************************ 00:10:19.673 10:05:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:19.673 10:05:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.673 10:05:32 -- accel/accel.sh@17 -- # local accel_module 00:10:19.673 10:05:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:19.673 10:05:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:19.673 10:05:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.673 10:05:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.673 10:05:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.673 10:05:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.673 10:05:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.673 10:05:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.673 10:05:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.673 10:05:32 -- accel/accel.sh@42 -- # jq -r . 00:10:19.673 [2024-04-24 10:05:32.658996] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
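The accel.sh@12 lines above show the exact command the wrapper drives: it runs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf and feeds a generated accel JSON config over /dev/fd/62. As a minimal sketch, assuming a built SPDK tree at the logged path and no hardware accel modules configured (so the fd-62 config can be omitted), the decompress run being started here could be reproduced by hand:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y

Here -t is the run time in seconds, -w the workload type, -l the input file and -y enables verification; all four flags appear verbatim in the command line logged above.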
00:10:19.673 [2024-04-24 10:05:32.659080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163212 ] 00:10:19.673 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.673 [2024-04-24 10:05:32.713662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.673 [2024-04-24 10:05:32.783713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.049 10:05:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:21.049 00:10:21.049 SPDK Configuration: 00:10:21.049 Core mask: 0x1 00:10:21.049 00:10:21.049 Accel Perf Configuration: 00:10:21.049 Workload Type: decompress 00:10:21.049 Transfer size: 4096 bytes 00:10:21.049 Vector count 1 00:10:21.049 Module: software 00:10:21.049 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:21.049 Queue depth: 32 00:10:21.049 Allocate depth: 32 00:10:21.049 # threads/core: 1 00:10:21.049 Run time: 1 seconds 00:10:21.049 Verify: Yes 00:10:21.049 00:10:21.049 Running for 1 seconds... 00:10:21.049 00:10:21.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.049 ------------------------------------------------------------------------------------ 00:10:21.049 0,0 72896/s 284 MiB/s 0 0 00:10:21.049 ==================================================================================== 00:10:21.049 Total 72896/s 284 MiB/s 0 0' 00:10:21.049 10:05:33 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:33 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:21.049 10:05:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:21.049 10:05:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.049 10:05:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.049 10:05:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.049 10:05:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.049 10:05:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.049 10:05:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.049 10:05:33 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.049 10:05:33 -- accel/accel.sh@42 -- # jq -r . 00:10:21.049 [2024-04-24 10:05:34.012691] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
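The Bandwidth column in these result tables is the Transfers column multiplied by the transfer size, reported in binary MiB. A quick shell check of the decompress total above:

    echo $(( 72896 * 4096 / 1048576 ))   # prints 284, matching the 284 MiB/s Total row

The same arithmetic gives exactly 238 MiB/s for the 60928/s compress run earlier.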
00:10:21.049 [2024-04-24 10:05:34.012766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163449 ] 00:10:21.049 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.049 [2024-04-24 10:05:34.067666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.049 [2024-04-24 10:05:34.137376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=0x1 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=decompress 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=software 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@23 -- # accel_module=software 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=32 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 
-- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=32 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=1 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val=Yes 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:21.049 10:05:34 -- accel/accel.sh@21 -- # val= 00:10:21.049 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:10:21.049 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@21 -- # val= 00:10:22.423 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@21 -- # val= 00:10:22.423 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@21 -- # val= 00:10:22.423 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@21 -- # val= 00:10:22.423 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@21 -- # val= 00:10:22.423 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@21 -- # val= 00:10:22.423 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:10:22.423 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:10:22.423 10:05:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:22.423 10:05:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:22.423 10:05:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:22.423 00:10:22.423 real 0m2.711s 00:10:22.423 user 0m2.490s 00:10:22.423 sys 0m0.230s 00:10:22.423 10:05:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.423 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:10:22.423 ************************************ 00:10:22.424 END TEST accel_decomp 00:10:22.424 ************************************ 00:10:22.424 10:05:35 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:22.424 10:05:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:22.424 10:05:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.424 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:10:22.424 ************************************ 00:10:22.424 START TEST accel_decmop_full 00:10:22.424 ************************************ 00:10:22.424 10:05:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:22.424 10:05:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:22.424 10:05:35 -- accel/accel.sh@17 -- # local accel_module 00:10:22.424 10:05:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:22.424 10:05:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:22.424 10:05:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.424 10:05:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:22.424 10:05:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.424 10:05:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.424 10:05:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:22.424 10:05:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:22.424 10:05:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:22.424 10:05:35 -- accel/accel.sh@42 -- # jq -r . 00:10:22.424 [2024-04-24 10:05:35.407499] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:22.424 [2024-04-24 10:05:35.407575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163703 ] 00:10:22.424 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.424 [2024-04-24 10:05:35.461859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.424 [2024-04-24 10:05:35.531748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.799 10:05:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:23.799 00:10:23.799 SPDK Configuration: 00:10:23.799 Core mask: 0x1 00:10:23.799 00:10:23.799 Accel Perf Configuration: 00:10:23.799 Workload Type: decompress 00:10:23.799 Transfer size: 111250 bytes 00:10:23.799 Vector count 1 00:10:23.799 Module: software 00:10:23.799 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:23.799 Queue depth: 32 00:10:23.799 Allocate depth: 32 00:10:23.799 # threads/core: 1 00:10:23.799 Run time: 1 seconds 00:10:23.799 Verify: Yes 00:10:23.799 00:10:23.799 Running for 1 seconds... 
00:10:23.799 00:10:23.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:23.799 ------------------------------------------------------------------------------------ 00:10:23.799 0,0 4832/s 512 MiB/s 0 0 00:10:23.799 ==================================================================================== 00:10:23.799 Total 4832/s 512 MiB/s 0 0' 00:10:23.799 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.799 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.799 10:05:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:23.799 10:05:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:23.799 10:05:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.799 10:05:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.799 10:05:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.799 10:05:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.799 10:05:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.799 10:05:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.800 10:05:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.800 10:05:36 -- accel/accel.sh@42 -- # jq -r . 00:10:23.800 [2024-04-24 10:05:36.769388] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:23.800 [2024-04-24 10:05:36.769467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163935 ] 00:10:23.800 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.800 [2024-04-24 10:05:36.826133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.800 [2024-04-24 10:05:36.894377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=0x1 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=decompress 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" 
in 00:10:23.800 10:05:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=software 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@23 -- # accel_module=software 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=32 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=32 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=1 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val=Yes 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:23.800 10:05:36 -- accel/accel.sh@21 -- # val= 00:10:23.800 10:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # IFS=: 00:10:23.800 10:05:36 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.252 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.252 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.252 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.252 10:05:38 -- 
accel/accel.sh@20 -- # IFS=: 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.252 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.252 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@21 -- # val= 00:10:25.252 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:10:25.252 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:10:25.252 10:05:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:25.252 10:05:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:25.252 10:05:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.252 00:10:25.252 real 0m2.734s 00:10:25.252 user 0m2.516s 00:10:25.252 sys 0m0.226s 00:10:25.252 10:05:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.252 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:10:25.252 ************************************ 00:10:25.252 END TEST accel_decmop_full 00:10:25.252 ************************************ 00:10:25.252 10:05:38 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:25.252 10:05:38 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:25.252 10:05:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.252 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:10:25.252 ************************************ 00:10:25.252 START TEST accel_decomp_mcore 00:10:25.253 ************************************ 00:10:25.253 10:05:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:25.253 10:05:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:25.253 10:05:38 -- accel/accel.sh@17 -- # local accel_module 00:10:25.253 10:05:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:25.253 10:05:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:25.253 10:05:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.253 10:05:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.253 10:05:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.253 10:05:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.253 10:05:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.253 10:05:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.253 10:05:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.253 10:05:38 -- accel/accel.sh@42 -- # jq -r . 00:10:25.253 [2024-04-24 10:05:38.180017] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:25.253 [2024-04-24 10:05:38.180098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164187 ] 00:10:25.253 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.253 [2024-04-24 10:05:38.235078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.253 [2024-04-24 10:05:38.312394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.253 [2024-04-24 10:05:38.312491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.253 [2024-04-24 10:05:38.312731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.253 [2024-04-24 10:05:38.312734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.629 10:05:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:26.629 00:10:26.629 SPDK Configuration: 00:10:26.629 Core mask: 0xf 00:10:26.629 00:10:26.629 Accel Perf Configuration: 00:10:26.629 Workload Type: decompress 00:10:26.629 Transfer size: 4096 bytes 00:10:26.629 Vector count 1 00:10:26.629 Module: software 00:10:26.629 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:26.629 Queue depth: 32 00:10:26.629 Allocate depth: 32 00:10:26.629 # threads/core: 1 00:10:26.629 Run time: 1 seconds 00:10:26.629 Verify: Yes 00:10:26.629 00:10:26.629 Running for 1 seconds... 00:10:26.629 00:10:26.629 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:26.629 ------------------------------------------------------------------------------------ 00:10:26.629 0,0 59584/s 232 MiB/s 0 0 00:10:26.629 3,0 61600/s 240 MiB/s 0 0 00:10:26.629 2,0 61632/s 240 MiB/s 0 0 00:10:26.629 1,0 61376/s 239 MiB/s 0 0 00:10:26.629 ==================================================================================== 00:10:26.629 Total 244192/s 953 MiB/s 0 0' 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:26.629 10:05:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:26.629 10:05:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.629 10:05:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.629 10:05:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.629 10:05:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.629 10:05:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.629 10:05:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.629 10:05:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.629 10:05:39 -- accel/accel.sh@42 -- # jq -r . 00:10:26.629 [2024-04-24 10:05:39.549907] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
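Relative to the single-core runs, the mcore variant only adds -m 0xf, so accel_perf starts one reactor per bit of the core mask (the four "Reactor started on core N" notices above) and the table gains one row per core; the rows sum to the Total line (59584 + 61600 + 61632 + 61376 = 244192 transfers/s). A hand-run sketch under the same assumptions as before:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf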
00:10:26.629 [2024-04-24 10:05:39.549989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164431 ] 00:10:26.629 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.629 [2024-04-24 10:05:39.606422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.629 [2024-04-24 10:05:39.677607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.629 [2024-04-24 10:05:39.677708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.629 [2024-04-24 10:05:39.677784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.629 [2024-04-24 10:05:39.677785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val=0xf 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val=decompress 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val=software 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@23 -- # accel_module=software 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:26.629 10:05:39 -- accel/accel.sh@22 -- # case 
"$var" in 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.629 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.629 10:05:39 -- accel/accel.sh@21 -- # val=32 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.630 10:05:39 -- accel/accel.sh@21 -- # val=32 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.630 10:05:39 -- accel/accel.sh@21 -- # val=1 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.630 10:05:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.630 10:05:39 -- accel/accel.sh@21 -- # val=Yes 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.630 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:26.630 10:05:39 -- accel/accel.sh@21 -- # val= 00:10:26.630 10:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # IFS=: 00:10:26.630 10:05:39 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 
10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@21 -- # val= 00:10:28.006 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:10:28.006 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:10:28.006 10:05:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:28.006 10:05:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:28.006 10:05:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.006 00:10:28.006 real 0m2.741s 00:10:28.006 user 0m9.170s 00:10:28.006 sys 0m0.240s 00:10:28.006 10:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.006 10:05:40 -- common/autotest_common.sh@10 -- # set +x 00:10:28.006 ************************************ 00:10:28.006 END TEST accel_decomp_mcore 00:10:28.006 ************************************ 00:10:28.006 10:05:40 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:28.006 10:05:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:28.006 10:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:28.006 10:05:40 -- common/autotest_common.sh@10 -- # set +x 00:10:28.006 ************************************ 00:10:28.006 START TEST accel_decomp_full_mcore 00:10:28.006 ************************************ 00:10:28.006 10:05:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:28.006 10:05:40 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.006 10:05:40 -- accel/accel.sh@17 -- # local accel_module 00:10:28.006 10:05:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:28.006 10:05:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:28.006 10:05:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.006 10:05:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.006 10:05:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.006 10:05:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.006 10:05:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.006 10:05:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.006 10:05:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.006 10:05:40 -- accel/accel.sh@42 -- # jq -r . 00:10:28.006 [2024-04-24 10:05:40.958770] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:10:28.006 [2024-04-24 10:05:40.958836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164683 ] 00:10:28.006 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.006 [2024-04-24 10:05:41.014000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.006 [2024-04-24 10:05:41.088590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.006 [2024-04-24 10:05:41.088685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.006 [2024-04-24 10:05:41.088764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.006 [2024-04-24 10:05:41.088766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.383 10:05:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:29.383 00:10:29.383 SPDK Configuration: 00:10:29.383 Core mask: 0xf 00:10:29.383 00:10:29.383 Accel Perf Configuration: 00:10:29.383 Workload Type: decompress 00:10:29.383 Transfer size: 111250 bytes 00:10:29.383 Vector count 1 00:10:29.383 Module: software 00:10:29.383 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:29.383 Queue depth: 32 00:10:29.383 Allocate depth: 32 00:10:29.383 # threads/core: 1 00:10:29.383 Run time: 1 seconds 00:10:29.383 Verify: Yes 00:10:29.383 00:10:29.383 Running for 1 seconds... 00:10:29.383 00:10:29.383 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:29.383 ------------------------------------------------------------------------------------ 00:10:29.383 0,0 4512/s 478 MiB/s 0 0 00:10:29.383 3,0 4672/s 495 MiB/s 0 0 00:10:29.383 2,0 4672/s 495 MiB/s 0 0 00:10:29.383 1,0 4672/s 495 MiB/s 0 0 00:10:29.383 ==================================================================================== 00:10:29.383 Total 18528/s 1965 MiB/s 0 0' 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:29.383 10:05:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:29.383 10:05:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.383 10:05:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.383 10:05:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.383 10:05:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.383 10:05:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.383 10:05:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.383 10:05:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.383 10:05:42 -- accel/accel.sh@42 -- # jq -r . 00:10:29.383 [2024-04-24 10:05:42.335713] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
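The _full variants additionally pass -o 0. Judging from the configuration dump above, that switches the transfer size from the 4096-byte default to the whole 111250-byte input file; treat that reading of -o as an inference from this log rather than documented behavior. Sketch, same assumptions as before:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf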
00:10:29.383 [2024-04-24 10:05:42.335782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164918 ] 00:10:29.383 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.383 [2024-04-24 10:05:42.392579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.383 [2024-04-24 10:05:42.464916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.383 [2024-04-24 10:05:42.465002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.383 [2024-04-24 10:05:42.465106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.383 [2024-04-24 10:05:42.465108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=0xf 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=decompress 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=software 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@23 -- # accel_module=software 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case 
"$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=32 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=32 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=1 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val=Yes 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:29.383 10:05:42 -- accel/accel.sh@21 -- # val= 00:10:29.383 10:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:10:29.383 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 
10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@21 -- # val= 00:10:30.761 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:10:30.761 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:10:30.761 10:05:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:30.761 10:05:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:30.761 10:05:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.761 00:10:30.761 real 0m2.761s 00:10:30.761 user 0m9.261s 00:10:30.761 sys 0m0.240s 00:10:30.761 10:05:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.761 10:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:30.761 ************************************ 00:10:30.761 END TEST accel_decomp_full_mcore 00:10:30.761 ************************************ 00:10:30.761 10:05:43 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:30.761 10:05:43 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:30.761 10:05:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.761 10:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:30.761 ************************************ 00:10:30.761 START TEST accel_decomp_mthread 00:10:30.761 ************************************ 00:10:30.761 10:05:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:30.761 10:05:43 -- accel/accel.sh@16 -- # local accel_opc 00:10:30.761 10:05:43 -- accel/accel.sh@17 -- # local accel_module 00:10:30.761 10:05:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:30.761 10:05:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:30.761 10:05:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.761 10:05:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.761 10:05:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.761 10:05:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.761 10:05:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.761 10:05:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.761 10:05:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.761 10:05:43 -- accel/accel.sh@42 -- # jq -r . 00:10:30.761 [2024-04-24 10:05:43.758055] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:30.761 [2024-04-24 10:05:43.758155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165176 ] 00:10:30.761 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.761 [2024-04-24 10:05:43.814386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.761 [2024-04-24 10:05:43.884578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.137 10:05:45 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:32.137
00:10:32.137 SPDK Configuration:
00:10:32.137 Core mask: 0x1
00:10:32.137
00:10:32.137 Accel Perf Configuration:
00:10:32.137 Workload Type: decompress
00:10:32.137 Transfer size: 4096 bytes
00:10:32.137 Vector count 1
00:10:32.137 Module: software
00:10:32.137 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:10:32.137 Queue depth: 32
00:10:32.137 Allocate depth: 32
00:10:32.137 # threads/core: 2
00:10:32.137 Run time: 1 seconds
00:10:32.137 Verify: Yes
00:10:32.137
00:10:32.137 Running for 1 seconds...
00:10:32.137
00:10:32.137 Core,Thread Transfers Bandwidth Failed Miscompares
00:10:32.137 ------------------------------------------------------------------------------------
00:10:32.137 0,1 36928/s 68 MiB/s 0 0
00:10:32.137 0,0 36832/s 67 MiB/s 0 0
00:10:32.137 ====================================================================================
00:10:32.137 Total 73760/s 288 MiB/s 0 0'
00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=:
00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val
00:10:32.137 10:05:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:10:32.137 10:05:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:10:32.137 10:05:45 -- accel/accel.sh@12 -- # build_accel_config
00:10:32.137 10:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:32.137 10:05:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:32.137 10:05:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:32.137 10:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:32.137 10:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:32.137 10:05:45 -- accel/accel.sh@41 -- # local IFS=,
00:10:32.137 10:05:45 -- accel/accel.sh@42 -- # jq -r .
00:10:32.137 [2024-04-24 10:05:45.115911] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
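The two result rows above are the two worker threads that -T 2 places on core 0 (thread ids 0 and 1), and the Total row aggregates both. The same workload can be launched outside the harness; a minimal sketch using the paths this log shows, dropping the -c /dev/fd/62 JSON config that the harness pipes in:

    # 1-second software decompress of the bib test file, two threads per core
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2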
00:10:32.137 [2024-04-24 10:05:45.115982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165415 ] 00:10:32.137 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.137 [2024-04-24 10:05:45.173934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.137 [2024-04-24 10:05:45.242925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val=0x1 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val=decompress 00:10:32.137 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.137 10:05:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.137 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.137 10:05:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val=software 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@23 -- # accel_module=software 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val=32 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 
-- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val=32 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val=2 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val=Yes 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:32.138 10:05:45 -- accel/accel.sh@21 -- # val= 00:10:32.138 10:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:10:32.138 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@21 -- # val= 00:10:33.514 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:10:33.514 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:10:33.514 10:05:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:33.514 10:05:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:33.514 10:05:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.514 00:10:33.514 real 0m2.720s 00:10:33.514 user 0m2.509s 00:10:33.514 sys 0m0.221s 00:10:33.514 10:05:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.514 10:05:46 -- common/autotest_common.sh@10 -- # set +x 
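The long runs of "val= / case "$var" in / IFS=: / read -r var val" above are bash xtrace of accel.sh walking a list of colon-separated name:value settings and dispatching on the name. The idiom, as a minimal sketch (the variable names and settings list here are illustrative, not accel.sh's actual ones):

    # Parse "name:value" lines; names with no match fall through the case untouched.
    printf '%s\n' opc:decompress module:software |
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # e.g. decompress
            module) accel_module=$val ;;  # e.g. software
        esac
    done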
00:10:33.514 ************************************ 00:10:33.514 END TEST accel_decomp_mthread 00:10:33.514 ************************************ 00:10:33.514 10:05:46 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:33.514 10:05:46 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:33.514 10:05:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.514 10:05:46 -- common/autotest_common.sh@10 -- # set +x 00:10:33.514 ************************************ 00:10:33.514 START TEST accel_deomp_full_mthread 00:10:33.514 ************************************ 00:10:33.514 10:05:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:33.514 10:05:46 -- accel/accel.sh@16 -- # local accel_opc 00:10:33.514 10:05:46 -- accel/accel.sh@17 -- # local accel_module 00:10:33.514 10:05:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:33.514 10:05:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:33.514 10:05:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.514 10:05:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.514 10:05:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.514 10:05:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.514 10:05:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.514 10:05:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.514 10:05:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.514 10:05:46 -- accel/accel.sh@42 -- # jq -r . 00:10:33.514 [2024-04-24 10:05:46.516785] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:33.514 [2024-04-24 10:05:46.516861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165670 ] 00:10:33.514 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.514 [2024-04-24 10:05:46.572532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.514 [2024-04-24 10:05:46.642185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.891 10:05:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:34.891 00:10:34.891 SPDK Configuration: 00:10:34.891 Core mask: 0x1 00:10:34.891 00:10:34.891 Accel Perf Configuration: 00:10:34.891 Workload Type: decompress 00:10:34.891 Transfer size: 111250 bytes 00:10:34.891 Vector count 1 00:10:34.891 Module: software 00:10:34.891 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:34.891 Queue depth: 32 00:10:34.891 Allocate depth: 32 00:10:34.891 # threads/core: 2 00:10:34.891 Run time: 1 seconds 00:10:34.891 Verify: Yes 00:10:34.891 00:10:34.891 Running for 1 seconds... 
00:10:34.891
00:10:34.891 Core,Thread Transfers Bandwidth Failed Miscompares
00:10:34.891 ------------------------------------------------------------------------------------
00:10:34.891 0,1 2528/s 104 MiB/s 0 0
00:10:34.891 0,0 2464/s 101 MiB/s 0 0
00:10:34.891 ====================================================================================
00:10:34.891 Total 4992/s 529 MiB/s 0 0'
00:10:34.891 10:05:47 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:47 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:34.891 10:05:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:34.891 10:05:47 -- accel/accel.sh@12 -- # build_accel_config
00:10:34.891 10:05:47 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:34.891 10:05:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:34.891 10:05:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:34.891 10:05:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:34.891 10:05:47 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:34.891 10:05:47 -- accel/accel.sh@41 -- # local IFS=,
00:10:34.891 10:05:47 -- accel/accel.sh@42 -- # jq -r .
00:10:34.891 [2024-04-24 10:05:47.894300] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:10:34.891 [2024-04-24 10:05:47.894361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165903 ]
00:10:34.891 EAL: No free 2048 kB hugepages reported on node 1
00:10:34.891 [2024-04-24 10:05:47.948374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:34.891 [2024-04-24 10:05:48.016686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=
00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=
00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=
00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=0x1
00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=
00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=
00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=:
00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val
00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=decompress
00:10:34.891 
10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.891 10:05:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.891 10:05:48 -- accel/accel.sh@21 -- # val=software 00:10:34.891 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.891 10:05:48 -- accel/accel.sh@23 -- # accel_module=software 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.891 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val=32 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val=32 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val=2 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val=Yes 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:34.892 10:05:48 -- accel/accel.sh@21 -- # val= 00:10:34.892 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:10:34.892 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # 
case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@21 -- # val= 00:10:36.269 10:05:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # IFS=: 00:10:36.269 10:05:49 -- accel/accel.sh@20 -- # read -r var val 00:10:36.269 10:05:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:36.269 10:05:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:36.269 10:05:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:36.269 00:10:36.269 real 0m2.762s 00:10:36.269 user 0m2.543s 00:10:36.269 sys 0m0.227s 00:10:36.269 10:05:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.269 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.269 ************************************ 00:10:36.269 END TEST accel_deomp_full_mthread 00:10:36.269 ************************************ 00:10:36.269 10:05:49 -- accel/accel.sh@116 -- # [[ n == y ]] 00:10:36.269 10:05:49 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:36.269 10:05:49 -- accel/accel.sh@129 -- # build_accel_config 00:10:36.269 10:05:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:36.269 10:05:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.269 10:05:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.269 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.269 10:05:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.269 10:05:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.269 10:05:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.269 10:05:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.269 10:05:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.269 10:05:49 -- accel/accel.sh@42 -- # jq -r . 00:10:36.269 ************************************ 00:10:36.269 START TEST accel_dif_functional_tests 00:10:36.269 ************************************ 00:10:36.269 10:05:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:36.269 [2024-04-24 10:05:49.332652] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
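The DIF test binary starting above is launched with core mask 0x7 (visible in the EAL parameters on the next line), which is why "Total cores available: 3" and reactors on cores 0-2 follow. A small sketch of how a hex mask maps to a core count, assuming bash arithmetic:

    # Count the set bits in the 0x7 core mask
    mask=0x7 bits=0
    for (( m = mask; m > 0; m >>= 1 )); do (( bits += m & 1 )); done
    echo "$bits cores"   # prints: 3 cores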
00:10:36.269 [2024-04-24 10:05:49.332701] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166162 ]
00:10:36.269 EAL: No free 2048 kB hugepages reported on node 1
00:10:36.269 [2024-04-24 10:05:49.384517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:36.269 [2024-04-24 10:05:49.455769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:36.269 [2024-04-24 10:05:49.455865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:36.269 [2024-04-24 10:05:49.455868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.269
00:10:36.269
00:10:36.269 CUnit - A unit testing framework for C - Version 2.1-3
00:10:36.269 http://cunit.sourceforge.net/
00:10:36.269
00:10:36.269
00:10:36.269 Suite: accel_dif
00:10:36.269 Test: verify: DIF generated, GUARD check ...passed
00:10:36.269 Test: verify: DIF generated, APPTAG check ...passed
00:10:36.269 Test: verify: DIF generated, REFTAG check ...passed
00:10:36.269 Test: verify: DIF not generated, GUARD check ...[2024-04-24 10:05:49.524474] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:10:36.269 [2024-04-24 10:05:49.524517] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:10:36.269 passed
00:10:36.269 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 10:05:49.524564] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:10:36.269 [2024-04-24 10:05:49.524579] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:10:36.269 passed
00:10:36.269 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 10:05:49.524595] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:10:36.269 [2024-04-24 10:05:49.524609] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:10:36.269 passed
00:10:36.269 Test: verify: APPTAG correct, APPTAG check ...passed
00:10:36.269 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 10:05:49.524649] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:10:36.269 passed
00:10:36.269 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:10:36.269 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:10:36.269 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:10:36.269 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 10:05:49.524750] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:10:36.269 passed
00:10:36.269 Test: generate copy: DIF generated, GUARD check ...passed
00:10:36.269 Test: generate copy: DIF generated, APTTAG check ...passed
00:10:36.269 Test: generate copy: DIF generated, REFTAG check ...passed
00:10:36.269 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:10:36.269 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:10:36.269 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:10:36.269 Test: generate copy: iovecs-len validate ...[2024-04-24 10:05:49.524913] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:10:36.269 passed 00:10:36.269 Test: generate copy: buffer alignment validate ...passed 00:10:36.269 00:10:36.269 Run Summary: Type Total Ran Passed Failed Inactive 00:10:36.269 suites 1 1 n/a 0 0 00:10:36.269 tests 20 20 20 0 0 00:10:36.269 asserts 204 204 204 0 n/a 00:10:36.269 00:10:36.269 Elapsed time = 0.000 seconds 00:10:36.528 00:10:36.528 real 0m0.424s 00:10:36.528 user 0m0.641s 00:10:36.528 sys 0m0.139s 00:10:36.528 10:05:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.528 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.528 ************************************ 00:10:36.528 END TEST accel_dif_functional_tests 00:10:36.528 ************************************ 00:10:36.528 00:10:36.528 real 0m57.849s 00:10:36.528 user 1m6.463s 00:10:36.528 sys 0m6.086s 00:10:36.528 10:05:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.528 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.528 ************************************ 00:10:36.528 END TEST accel 00:10:36.528 ************************************ 00:10:36.528 10:05:49 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:36.528 10:05:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:36.528 10:05:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.528 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.528 ************************************ 00:10:36.528 START TEST accel_rpc 00:10:36.528 ************************************ 00:10:36.528 10:05:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:36.787 * Looking for test storage... 00:10:36.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:10:36.787 10:05:49 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:36.787 10:05:49 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=166435 00:10:36.787 10:05:49 -- accel/accel_rpc.sh@15 -- # waitforlisten 166435 00:10:36.787 10:05:49 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:36.787 10:05:49 -- common/autotest_common.sh@819 -- # '[' -z 166435 ']' 00:10:36.787 10:05:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.787 10:05:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:36.787 10:05:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.787 10:05:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:36.787 10:05:49 -- common/autotest_common.sh@10 -- # set +x 00:10:36.787 [2024-04-24 10:05:49.920074] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
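All 20 DIF tests above passed (204 asserts): the *ERROR* lines interleaved with the negative cases are expected output, since those cases deliberately corrupt one protection-information field at a time and check that _dif_verify rejects it. For orientation, the 8-byte T10 protection-information field the suite exercises is laid out as follows (standard T10 layout, not taken from this log):

    # T10 protection information, one 8-byte field appended per block:
    #   bytes 0-1  Guard tag       CRC-16 of the block data (the 5a5a vs 7867 mismatches above)
    #   bytes 2-3  Application tag (the 14 vs 5a5a mismatches above)
    #   bytes 4-7  Reference tag   typically the LBA for Type 1 (the a vs 5a5a5a5a mismatches above)

The START TEST/END TEST banners that bracket every suite in this log come from the run_test helper in common/autotest_common.sh; a simplified sketch of the pattern (the real helper also records timings and toggles xtrace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"; local rc=$?   # run the test command, e.g. a *.sh script
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }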
00:10:36.787 [2024-04-24 10:05:49.920124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166435 ] 00:10:36.787 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.787 [2024-04-24 10:05:49.972403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.787 [2024-04-24 10:05:50.053523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:36.787 [2024-04-24 10:05:50.053635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.744 10:05:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:37.744 10:05:50 -- common/autotest_common.sh@852 -- # return 0 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:37.744 10:05:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:37.744 10:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.744 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.744 ************************************ 00:10:37.744 START TEST accel_assign_opcode 00:10:37.744 ************************************ 00:10:37.744 10:05:50 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:37.744 10:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.744 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.744 [2024-04-24 10:05:50.727595] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:37.744 10:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:37.744 10:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.744 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.744 [2024-04-24 10:05:50.735608] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:37.744 10:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:37.744 10:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.744 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.744 10:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:37.744 10:05:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@42 -- # grep software 00:10:37.744 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:37.744 10:05:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.744 software 00:10:37.744 00:10:37.744 real 0m0.235s 00:10:37.744 user 0m0.044s 00:10:37.744 sys 0m0.011s 00:10:37.744 10:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.744 10:05:50 -- common/autotest_common.sh@10 -- # set +x 
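This is the point of starting spdk_tgt with --wait-for-rpc: opcode-to-module assignments have to be made before the accel framework initializes, so the test assigns first and only then calls framework_start_init. Replayed by hand against a running target (paths as in this log):

    ./scripts/rpc.py accel_assign_opc -o copy -m software     # route the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                     # now let subsystem init proceed
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # -> software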
00:10:37.744 ************************************ 00:10:37.744 END TEST accel_assign_opcode 00:10:37.744 ************************************ 00:10:37.744 10:05:50 -- accel/accel_rpc.sh@55 -- # killprocess 166435 00:10:37.744 10:05:50 -- common/autotest_common.sh@926 -- # '[' -z 166435 ']' 00:10:37.744 10:05:50 -- common/autotest_common.sh@930 -- # kill -0 166435 00:10:37.744 10:05:50 -- common/autotest_common.sh@931 -- # uname 00:10:37.744 10:05:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:37.744 10:05:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 166435 00:10:38.002 10:05:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:38.002 10:05:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:38.002 10:05:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 166435' 00:10:38.002 killing process with pid 166435 00:10:38.002 10:05:51 -- common/autotest_common.sh@945 -- # kill 166435 00:10:38.002 10:05:51 -- common/autotest_common.sh@950 -- # wait 166435 00:10:38.261 00:10:38.261 real 0m1.575s 00:10:38.261 user 0m1.643s 00:10:38.261 sys 0m0.395s 00:10:38.261 10:05:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.261 10:05:51 -- common/autotest_common.sh@10 -- # set +x 00:10:38.261 ************************************ 00:10:38.261 END TEST accel_rpc 00:10:38.261 ************************************ 00:10:38.261 10:05:51 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:38.261 10:05:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:38.261 10:05:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:38.261 10:05:51 -- common/autotest_common.sh@10 -- # set +x 00:10:38.261 ************************************ 00:10:38.261 START TEST app_cmdline 00:10:38.261 ************************************ 00:10:38.261 10:05:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:38.261 * Looking for test storage... 00:10:38.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:38.261 10:05:51 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:38.261 10:05:51 -- app/cmdline.sh@17 -- # spdk_tgt_pid=166742 00:10:38.261 10:05:51 -- app/cmdline.sh@18 -- # waitforlisten 166742 00:10:38.261 10:05:51 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:38.261 10:05:51 -- common/autotest_common.sh@819 -- # '[' -z 166742 ']' 00:10:38.261 10:05:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.261 10:05:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:38.261 10:05:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.261 10:05:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:38.261 10:05:51 -- common/autotest_common.sh@10 -- # set +x 00:10:38.261 [2024-04-24 10:05:51.532433] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
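spdk_tgt is started here with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the version JSON and the -32601 "Method not found" rejection below both follow from that. A condensed manual replay (hypothetical session, paths as in the log):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version          # allowed: prints the version object
    ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected: JSON-RPC error -32601, Method not found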
00:10:38.261 [2024-04-24 10:05:51.532485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166742 ] 00:10:38.520 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.520 [2024-04-24 10:05:51.585884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.520 [2024-04-24 10:05:51.656960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:38.520 [2024-04-24 10:05:51.657077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.087 10:05:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:39.087 10:05:52 -- common/autotest_common.sh@852 -- # return 0 00:10:39.087 10:05:52 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:39.345 { 00:10:39.345 "version": "SPDK v24.01.1-pre git sha1 36faa8c312b", 00:10:39.345 "fields": { 00:10:39.345 "major": 24, 00:10:39.345 "minor": 1, 00:10:39.345 "patch": 1, 00:10:39.345 "suffix": "-pre", 00:10:39.345 "commit": "36faa8c312b" 00:10:39.345 } 00:10:39.345 } 00:10:39.345 10:05:52 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:39.345 10:05:52 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:39.345 10:05:52 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:39.345 10:05:52 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:39.345 10:05:52 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:39.345 10:05:52 -- app/cmdline.sh@26 -- # sort 00:10:39.345 10:05:52 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:39.345 10:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:39.345 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:39.345 10:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:39.345 10:05:52 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:39.345 10:05:52 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:39.345 10:05:52 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:39.345 10:05:52 -- common/autotest_common.sh@640 -- # local es=0 00:10:39.345 10:05:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:39.345 10:05:52 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.345 10:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:39.345 10:05:52 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.345 10:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:39.345 10:05:52 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.345 10:05:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:39.345 10:05:52 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.345 10:05:52 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:39.345 10:05:52 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:39.603 request: 00:10:39.603 { 00:10:39.603 "method": "env_dpdk_get_mem_stats", 00:10:39.603 "req_id": 1 00:10:39.603 } 00:10:39.603 Got JSON-RPC error response 00:10:39.603 response: 00:10:39.603 { 00:10:39.603 "code": -32601, 00:10:39.603 "message": "Method not found" 00:10:39.603 } 00:10:39.603 10:05:52 -- common/autotest_common.sh@643 -- # es=1 00:10:39.603 10:05:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:39.603 10:05:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:39.603 10:05:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:39.603 10:05:52 -- app/cmdline.sh@1 -- # killprocess 166742 00:10:39.603 10:05:52 -- common/autotest_common.sh@926 -- # '[' -z 166742 ']' 00:10:39.603 10:05:52 -- common/autotest_common.sh@930 -- # kill -0 166742 00:10:39.603 10:05:52 -- common/autotest_common.sh@931 -- # uname 00:10:39.603 10:05:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:39.603 10:05:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 166742 00:10:39.603 10:05:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:39.603 10:05:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:39.603 10:05:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 166742' 00:10:39.603 killing process with pid 166742 00:10:39.603 10:05:52 -- common/autotest_common.sh@945 -- # kill 166742 00:10:39.603 10:05:52 -- common/autotest_common.sh@950 -- # wait 166742 00:10:39.861 00:10:39.861 real 0m1.677s 00:10:39.861 user 0m1.994s 00:10:39.861 sys 0m0.416s 00:10:39.861 10:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.861 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 ************************************ 00:10:39.861 END TEST app_cmdline 00:10:39.861 ************************************ 00:10:39.861 10:05:53 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:39.861 10:05:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:39.861 10:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.861 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 ************************************ 00:10:39.861 START TEST version 00:10:39.861 ************************************ 00:10:39.861 10:05:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:40.119 * Looking for test storage... 
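version.sh, which starts here, scrapes each version component straight out of include/spdk/version.h; every get_header_version call below reduces to a grep/cut/tr pipeline. Inlined for the MAJOR field (run from the spdk checkout):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h |
        cut -f2 | tr -d '"'    # -> 24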
00:10:40.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:40.119 10:05:53 -- app/version.sh@17 -- # get_header_version major 00:10:40.119 10:05:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:40.119 10:05:53 -- app/version.sh@14 -- # cut -f2 00:10:40.119 10:05:53 -- app/version.sh@14 -- # tr -d '"' 00:10:40.119 10:05:53 -- app/version.sh@17 -- # major=24 00:10:40.119 10:05:53 -- app/version.sh@18 -- # get_header_version minor 00:10:40.119 10:05:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:40.119 10:05:53 -- app/version.sh@14 -- # cut -f2 00:10:40.119 10:05:53 -- app/version.sh@14 -- # tr -d '"' 00:10:40.119 10:05:53 -- app/version.sh@18 -- # minor=1 00:10:40.119 10:05:53 -- app/version.sh@19 -- # get_header_version patch 00:10:40.119 10:05:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:40.119 10:05:53 -- app/version.sh@14 -- # cut -f2 00:10:40.119 10:05:53 -- app/version.sh@14 -- # tr -d '"' 00:10:40.119 10:05:53 -- app/version.sh@19 -- # patch=1 00:10:40.119 10:05:53 -- app/version.sh@20 -- # get_header_version suffix 00:10:40.119 10:05:53 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:40.119 10:05:53 -- app/version.sh@14 -- # cut -f2 00:10:40.120 10:05:53 -- app/version.sh@14 -- # tr -d '"' 00:10:40.120 10:05:53 -- app/version.sh@20 -- # suffix=-pre 00:10:40.120 10:05:53 -- app/version.sh@22 -- # version=24.1 00:10:40.120 10:05:53 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:40.120 10:05:53 -- app/version.sh@25 -- # version=24.1.1 00:10:40.120 10:05:53 -- app/version.sh@28 -- # version=24.1.1rc0 00:10:40.120 10:05:53 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.120 10:05:53 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:40.120 10:05:53 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:10:40.120 10:05:53 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:10:40.120 00:10:40.120 real 0m0.152s 00:10:40.120 user 0m0.082s 00:10:40.120 sys 0m0.104s 00:10:40.120 10:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.120 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 ************************************ 00:10:40.120 END TEST version 00:10:40.120 ************************************ 00:10:40.120 10:05:53 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@204 -- # uname -s 00:10:40.120 10:05:53 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:10:40.120 10:05:53 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:10:40.120 10:05:53 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:10:40.120 10:05:53 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@268 -- # timing_exit lib 00:10:40.120 10:05:53 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:10:40.120 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 10:05:53 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:10:40.120 10:05:53 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:10:40.120 10:05:53 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:40.120 10:05:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:40.120 10:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.120 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 ************************************ 00:10:40.120 START TEST nvmf_tcp 00:10:40.120 ************************************ 00:10:40.120 10:05:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:40.379 * Looking for test storage... 00:10:40.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@10 -- # uname -s 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.379 10:05:53 -- nvmf/common.sh@7 -- # uname -s 00:10:40.379 10:05:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.379 10:05:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.379 10:05:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.379 10:05:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.379 10:05:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.379 10:05:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.379 10:05:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.379 10:05:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.379 10:05:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.379 10:05:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.379 10:05:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.379 10:05:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.379 10:05:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.379 10:05:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.379 10:05:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.379 10:05:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.379 10:05:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.379 10:05:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.379 10:05:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.379 10:05:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- paths/export.sh@5 -- # export PATH 00:10:40.379 10:05:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- nvmf/common.sh@46 -- # : 0 00:10:40.379 10:05:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:40.379 10:05:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:40.379 10:05:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.379 10:05:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.379 10:05:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:40.379 10:05:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:40.379 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:10:40.379 10:05:53 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:40.379 10:05:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:40.379 10:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.379 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:40.379 ************************************ 00:10:40.379 START TEST nvmf_example 00:10:40.379 ************************************ 00:10:40.379 10:05:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:40.379 * Looking for test storage... 
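Both sourcings of nvmf/common.sh, above and again below when nvmf_example.sh re-sources it, derive the host identity the same way: NVME_HOSTNQN comes from nvme-cli and NVME_HOSTID is the bare UUID at its tail. The equivalent two lines, assuming nvme-cli is installed:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep only the UUID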
00:10:40.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.379 10:05:53 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.379 10:05:53 -- nvmf/common.sh@7 -- # uname -s 00:10:40.379 10:05:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.379 10:05:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.379 10:05:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.379 10:05:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.379 10:05:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.379 10:05:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.379 10:05:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.379 10:05:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.379 10:05:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.379 10:05:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.379 10:05:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.379 10:05:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.379 10:05:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.379 10:05:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.379 10:05:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.379 10:05:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.379 10:05:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.379 10:05:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.379 10:05:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.379 10:05:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- paths/export.sh@5 -- # export PATH 00:10:40.379 10:05:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.379 10:05:53 -- nvmf/common.sh@46 -- # : 0 00:10:40.379 10:05:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:40.379 10:05:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:40.379 10:05:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.379 10:05:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.379 10:05:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:40.379 10:05:53 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:40.379 10:05:53 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:40.379 10:05:53 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:40.379 10:05:53 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:40.379 10:05:53 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:40.379 10:05:53 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:40.379 10:05:53 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:40.379 10:05:53 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:40.379 10:05:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:40.379 10:05:53 -- common/autotest_common.sh@10 -- # set +x 00:10:40.379 10:05:53 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:40.379 10:05:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:40.379 10:05:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.379 10:05:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:40.379 10:05:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:40.379 10:05:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:40.379 10:05:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.379 10:05:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.379 10:05:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.379 10:05:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:40.379 10:05:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:40.379 10:05:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:40.379 10:05:53 -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.645 10:05:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:45.645 10:05:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:45.645 10:05:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:45.645 10:05:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:45.645 10:05:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:45.645 10:05:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:45.645 10:05:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:45.645 10:05:58 -- nvmf/common.sh@294 -- # net_devs=() 00:10:45.645 10:05:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:10:45.645 10:05:58 -- nvmf/common.sh@295 -- # e810=() 00:10:45.645 10:05:58 -- nvmf/common.sh@295 -- # local -ga e810 00:10:45.645 10:05:58 -- nvmf/common.sh@296 -- # x722=() 00:10:45.645 10:05:58 -- nvmf/common.sh@296 -- # local -ga x722 00:10:45.645 10:05:58 -- nvmf/common.sh@297 -- # mlx=() 00:10:45.645 10:05:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:45.645 10:05:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.645 10:05:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:45.645 10:05:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:45.645 10:05:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:45.645 10:05:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:45.645 10:05:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:45.645 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:45.645 10:05:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:45.645 10:05:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:45.645 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:45.645 10:05:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
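gather_supported_nvmf_pci_devs, traced above, buckets the host's NICs by PCI vendor:device ID (Intel 0x8086 E810 parts 0x1592/0x159b and X722 part 0x37d2, plus a list of Mellanox 0x15b3 parts) and, because this job runs with SPDK_TEST_NVMF_NICS=e810, keeps only the E810 bucket; both functions of the adapter at 0000:86:00 match 0x159b and are bound to the ice driver. The kernel interface behind each function is resolved from sysfs, roughly as in this sketch (the BDF is the one the scan reports):

  pci=0000:86:00.0
  cat "/sys/bus/pci/devices/$pci/device"   # -> 0x159b, an E810 function
  ls "/sys/bus/pci/devices/$pci/net/"      # -> the netdev name, cvl_0_0 on this host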
00:10:45.645 10:05:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:45.645 10:05:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:45.645 10:05:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.645 10:05:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:45.645 10:05:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.645 10:05:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:45.645 Found net devices under 0000:86:00.0: cvl_0_0 00:10:45.645 10:05:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.645 10:05:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:45.645 10:05:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.645 10:05:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:45.645 10:05:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.645 10:05:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:45.645 Found net devices under 0000:86:00.1: cvl_0_1 00:10:45.645 10:05:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.645 10:05:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:45.645 10:05:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:45.645 10:05:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:45.645 10:05:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:45.645 10:05:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.645 10:05:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.645 10:05:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.645 10:05:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:45.645 10:05:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.645 10:05:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.645 10:05:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:45.645 10:05:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.645 10:05:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.645 10:05:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:45.645 10:05:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:45.645 10:05:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.645 10:05:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.646 10:05:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.646 10:05:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.646 10:05:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:45.646 10:05:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.904 10:05:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.904 10:05:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.904 10:05:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:45.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:10:45.904 00:10:45.904 --- 10.0.0.2 ping statistics --- 00:10:45.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.904 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:45.904 10:05:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:10:45.904 00:10:45.904 --- 10.0.0.1 ping statistics --- 00:10:45.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.904 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:10:45.904 10:05:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.904 10:05:59 -- nvmf/common.sh@410 -- # return 0 00:10:45.904 10:05:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:45.904 10:05:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.904 10:05:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:45.904 10:05:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:45.904 10:05:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.904 10:05:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:45.904 10:05:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:45.904 10:05:59 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:45.904 10:05:59 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:45.904 10:05:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:45.904 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:45.904 10:05:59 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:45.904 10:05:59 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:45.904 10:05:59 -- target/nvmf_example.sh@34 -- # nvmfpid=170229 00:10:45.904 10:05:59 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:45.904 10:05:59 -- target/nvmf_example.sh@36 -- # waitforlisten 170229 00:10:45.904 10:05:59 -- common/autotest_common.sh@819 -- # '[' -z 170229 ']' 00:10:45.904 10:05:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.904 10:05:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:45.904 10:05:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
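While the example target comes up, note what nvmf_tcp_init just built: the two E810 ports are split into a target side and an initiator side. cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP traffic to the NVMe/TCP port 4420, and the two pings confirm reachability in both directions, so target and initiator talk across the physical link rather than the kernel loopback, which is the point of the NET_TYPE=phy flavour of this job. A minimal standalone sketch of the same plumbing (interface names are placeholders for whatever ports your NIC exposes):

  #!/usr/bin/env bash
  # Sketch of the namespace layout nvmf_tcp_init builds; TGT_IF/INI_IF are assumed names.
  set -euo pipefail
  TGT_IF=cvl_0_0        # port that will live inside the namespace (target side)
  INI_IF=cvl_0_1        # port that stays in the root namespace (initiator side)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                           # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                          # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                      # namespace -> initiator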
00:10:45.904 10:05:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:45.904 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:45.904 10:05:59 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:45.904 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.840 10:05:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:46.840 10:05:59 -- common/autotest_common.sh@852 -- # return 0 00:10:46.840 10:05:59 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:46.840 10:05:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:46.840 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:46.840 10:05:59 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.840 10:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.840 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:46.840 10:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.840 10:05:59 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:46.840 10:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.840 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:46.840 10:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.840 10:05:59 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:46.840 10:05:59 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.840 10:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.840 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:46.840 10:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.840 10:05:59 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:46.840 10:05:59 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.840 10:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.840 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:46.840 10:05:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.840 10:05:59 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.840 10:05:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.840 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:46.840 10:06:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.840 10:06:00 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:46.840 10:06:00 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:46.840 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.047 Initializing NVMe Controllers 00:10:59.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.047 Initialization complete. Launching workers. 
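Everything the example target serves was created over JSON-RPC in the trace above: a TCP transport (the -o and -u 8192 flags exactly as traced), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 allowing any host with serial SPDK00000000000001, that bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd here wraps scripts/rpc.py, so outside the harness the equivalent setup is roughly the following sketch (SPDK_DIR is an assumed variable, flags are copied from the trace, and the target must already be listening on the default /var/tmp/spdk.sock):

  rpc="$SPDK_DIR/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512        # prints the new bdev name, Malloc0 here
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation above (-q 64 -o 4096 -w randrw -M 30 -t 10) then drives a 64-deep queue of 4 KiB random IOs, 30% reads, for 10 seconds from the initiator side against the target inside the namespace. The result table below is internally consistent: 17790.34 IOPS x 4096 B = 69.49 MiB/s, matching the MiB/s column, and by Little's law 64 outstanding IOs / 17790.34 IOPS = 3.6 ms, matching the ~3597 us average latency.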
00:10:59.047 ======================================================== 00:10:59.047 Latency(us) 00:10:59.047 Device Information : IOPS MiB/s Average min max 00:10:59.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17790.34 69.49 3597.26 709.06 19244.75 00:10:59.047 ======================================================== 00:10:59.047 Total : 17790.34 69.49 3597.26 709.06 19244.75 00:10:59.047 00:10:59.047 10:06:10 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:59.047 10:06:10 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:59.047 10:06:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:59.047 10:06:10 -- nvmf/common.sh@116 -- # sync 00:10:59.047 10:06:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:59.047 10:06:10 -- nvmf/common.sh@119 -- # set +e 00:10:59.047 10:06:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:59.047 10:06:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:59.047 rmmod nvme_tcp 00:10:59.047 rmmod nvme_fabrics 00:10:59.047 rmmod nvme_keyring 00:10:59.047 10:06:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:59.047 10:06:10 -- nvmf/common.sh@123 -- # set -e 00:10:59.047 10:06:10 -- nvmf/common.sh@124 -- # return 0 00:10:59.047 10:06:10 -- nvmf/common.sh@477 -- # '[' -n 170229 ']' 00:10:59.047 10:06:10 -- nvmf/common.sh@478 -- # killprocess 170229 00:10:59.047 10:06:10 -- common/autotest_common.sh@926 -- # '[' -z 170229 ']' 00:10:59.047 10:06:10 -- common/autotest_common.sh@930 -- # kill -0 170229 00:10:59.047 10:06:10 -- common/autotest_common.sh@931 -- # uname 00:10:59.047 10:06:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:59.047 10:06:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 170229 00:10:59.047 10:06:10 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:10:59.047 10:06:10 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:10:59.047 10:06:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 170229' 00:10:59.047 killing process with pid 170229 00:10:59.047 10:06:10 -- common/autotest_common.sh@945 -- # kill 170229 00:10:59.047 10:06:10 -- common/autotest_common.sh@950 -- # wait 170229 00:10:59.047 nvmf threads initialize successfully 00:10:59.047 bdev subsystem init successfully 00:10:59.047 created a nvmf target service 00:10:59.047 create targets's poll groups done 00:10:59.047 all subsystems of target started 00:10:59.047 nvmf target is running 00:10:59.047 all subsystems of target stopped 00:10:59.047 destroy targets's poll groups done 00:10:59.047 destroyed the nvmf target service 00:10:59.047 bdev subsystem finish successfully 00:10:59.047 nvmf threads destroy successfully 00:10:59.047 10:06:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:59.047 10:06:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:59.047 10:06:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:59.047 10:06:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.047 10:06:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:59.047 10:06:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.047 10:06:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.047 10:06:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.306 10:06:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:59.306 10:06:12 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:59.306 10:06:12 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:10:59.306 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:10:59.306 00:10:59.306 real 0m19.094s 00:10:59.306 user 0m45.542s 00:10:59.306 sys 0m5.414s 00:10:59.306 10:06:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.306 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:10:59.306 ************************************ 00:10:59.306 END TEST nvmf_example 00:10:59.306 ************************************ 00:10:59.567 10:06:12 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:59.567 10:06:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:59.567 10:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.567 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:10:59.567 ************************************ 00:10:59.567 START TEST nvmf_filesystem 00:10:59.567 ************************************ 00:10:59.567 10:06:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:59.567 * Looking for test storage... 00:10:59.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.567 10:06:12 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:59.567 10:06:12 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:59.567 10:06:12 -- common/autotest_common.sh@34 -- # set -e 00:10:59.567 10:06:12 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:59.567 10:06:12 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:59.567 10:06:12 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:59.567 10:06:12 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:59.567 10:06:12 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:59.567 10:06:12 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:59.568 10:06:12 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:59.568 10:06:12 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:59.568 10:06:12 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:59.568 10:06:12 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:59.568 10:06:12 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:59.568 10:06:12 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:59.568 10:06:12 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:59.568 10:06:12 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:59.568 10:06:12 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:59.568 10:06:12 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:59.568 10:06:12 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:59.568 10:06:12 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:59.568 10:06:12 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:59.568 10:06:12 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:59.568 10:06:12 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:59.568 10:06:12 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:59.568 10:06:12 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:59.568 10:06:12 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:10:59.568 10:06:12 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:59.568 10:06:12 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:59.568 10:06:12 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:59.568 10:06:12 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:59.568 10:06:12 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:59.568 10:06:12 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:59.568 10:06:12 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:59.568 10:06:12 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:59.568 10:06:12 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:59.568 10:06:12 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:59.568 10:06:12 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:59.568 10:06:12 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:59.568 10:06:12 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:59.568 10:06:12 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:59.568 10:06:12 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:59.568 10:06:12 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:59.568 10:06:12 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:59.568 10:06:12 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:59.568 10:06:12 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:59.568 10:06:12 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:59.568 10:06:12 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:59.568 10:06:12 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:59.568 10:06:12 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:59.568 10:06:12 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:59.568 10:06:12 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:59.568 10:06:12 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:10:59.568 10:06:12 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:10:59.568 10:06:12 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:59.568 10:06:12 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:10:59.568 10:06:12 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:10:59.568 10:06:12 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:10:59.568 10:06:12 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:10:59.568 10:06:12 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:10:59.568 10:06:12 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:10:59.568 10:06:12 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:10:59.568 10:06:12 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:10:59.568 10:06:12 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:10:59.568 10:06:12 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:10:59.568 10:06:12 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:10:59.568 10:06:12 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:10:59.568 10:06:12 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:10:59.568 10:06:12 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:10:59.568 10:06:12 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:10:59.568 10:06:12 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:10:59.568 10:06:12 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:10:59.568 10:06:12 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:59.568 10:06:12 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:10:59.568 10:06:12 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:10:59.568 10:06:12 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:10:59.568 10:06:12 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:10:59.568 10:06:12 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:10:59.568 10:06:12 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:10:59.568 10:06:12 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:10:59.568 10:06:12 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:10:59.568 10:06:12 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:10:59.568 10:06:12 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:10:59.568 10:06:12 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:59.568 10:06:12 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:10:59.568 10:06:12 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:10:59.568 10:06:12 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:59.568 10:06:12 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:59.568 10:06:12 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:59.568 10:06:12 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:59.568 10:06:12 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:59.568 10:06:12 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:59.568 10:06:12 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:59.568 10:06:12 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:59.568 10:06:12 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:59.568 10:06:12 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:59.568 10:06:12 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:59.568 10:06:12 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:59.568 10:06:12 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:59.568 10:06:12 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:59.568 10:06:12 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:59.568 10:06:12 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:59.568 #define SPDK_CONFIG_H 00:10:59.568 #define SPDK_CONFIG_APPS 1 00:10:59.568 #define SPDK_CONFIG_ARCH native 00:10:59.568 #undef SPDK_CONFIG_ASAN 00:10:59.568 #undef SPDK_CONFIG_AVAHI 00:10:59.568 #undef SPDK_CONFIG_CET 00:10:59.568 #define SPDK_CONFIG_COVERAGE 1 00:10:59.568 #define SPDK_CONFIG_CROSS_PREFIX 00:10:59.568 #undef SPDK_CONFIG_CRYPTO 00:10:59.568 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:59.568 #undef SPDK_CONFIG_CUSTOMOCF 00:10:59.568 #undef SPDK_CONFIG_DAOS 00:10:59.568 #define SPDK_CONFIG_DAOS_DIR 00:10:59.568 #define SPDK_CONFIG_DEBUG 1 00:10:59.568 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:59.568 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:59.568 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:59.568 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:10:59.568 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:59.568 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:59.568 #define SPDK_CONFIG_EXAMPLES 1 00:10:59.568 #undef SPDK_CONFIG_FC 00:10:59.568 #define SPDK_CONFIG_FC_PATH 00:10:59.568 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:59.568 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:59.568 #undef SPDK_CONFIG_FUSE 00:10:59.568 #undef SPDK_CONFIG_FUZZER 00:10:59.568 #define SPDK_CONFIG_FUZZER_LIB 00:10:59.568 #undef SPDK_CONFIG_GOLANG 00:10:59.568 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:59.568 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:59.568 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:59.568 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:59.568 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:59.568 #define SPDK_CONFIG_IDXD 1 00:10:59.568 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:59.568 #undef SPDK_CONFIG_IPSEC_MB 00:10:59.568 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:59.568 #define SPDK_CONFIG_ISAL 1 00:10:59.568 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:59.568 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:59.568 #define SPDK_CONFIG_LIBDIR 00:10:59.568 #undef SPDK_CONFIG_LTO 00:10:59.568 #define SPDK_CONFIG_MAX_LCORES 00:10:59.568 #define SPDK_CONFIG_NVME_CUSE 1 00:10:59.568 #undef SPDK_CONFIG_OCF 00:10:59.568 #define SPDK_CONFIG_OCF_PATH 00:10:59.568 #define SPDK_CONFIG_OPENSSL_PATH 00:10:59.568 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:59.568 #undef SPDK_CONFIG_PGO_USE 00:10:59.568 #define SPDK_CONFIG_PREFIX /usr/local 00:10:59.568 #undef SPDK_CONFIG_RAID5F 00:10:59.568 #undef SPDK_CONFIG_RBD 00:10:59.568 #define SPDK_CONFIG_RDMA 1 00:10:59.568 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:59.568 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:59.568 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:59.569 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:59.569 #define SPDK_CONFIG_SHARED 1 00:10:59.569 #undef SPDK_CONFIG_SMA 00:10:59.569 #define SPDK_CONFIG_TESTS 1 00:10:59.569 #undef SPDK_CONFIG_TSAN 00:10:59.569 #define SPDK_CONFIG_UBLK 1 00:10:59.569 #define SPDK_CONFIG_UBSAN 1 00:10:59.569 #undef SPDK_CONFIG_UNIT_TESTS 00:10:59.569 #undef SPDK_CONFIG_URING 00:10:59.569 #define SPDK_CONFIG_URING_PATH 00:10:59.569 #undef SPDK_CONFIG_URING_ZNS 00:10:59.569 #undef SPDK_CONFIG_USDT 00:10:59.569 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:59.569 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:59.569 #undef SPDK_CONFIG_VFIO_USER 00:10:59.569 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:59.569 #define SPDK_CONFIG_VHOST 1 00:10:59.569 #define SPDK_CONFIG_VIRTIO 1 00:10:59.569 #undef SPDK_CONFIG_VTUNE 00:10:59.569 #define SPDK_CONFIG_VTUNE_DIR 00:10:59.569 #define SPDK_CONFIG_WERROR 1 00:10:59.569 #define SPDK_CONFIG_WPDK_DIR 00:10:59.569 #undef SPDK_CONFIG_XNVME 00:10:59.569 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:59.569 10:06:12 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:59.569 10:06:12 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.569 10:06:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.569 10:06:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.569 10:06:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.569 10:06:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.569 10:06:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.569 10:06:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.569 10:06:12 -- paths/export.sh@5 -- # export PATH 00:10:59.569 10:06:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.569 10:06:12 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:59.569 10:06:12 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:59.569 10:06:12 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:59.569 10:06:12 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:59.569 10:06:12 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:59.569 10:06:12 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:59.569 10:06:12 -- pm/common@16 -- # TEST_TAG=N/A 00:10:59.569 10:06:12 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:59.569 10:06:12 -- common/autotest_common.sh@52 -- # : 1 00:10:59.569 10:06:12 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:10:59.569 10:06:12 -- common/autotest_common.sh@56 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:59.569 10:06:12 -- 
common/autotest_common.sh@58 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:10:59.569 10:06:12 -- common/autotest_common.sh@60 -- # : 1 00:10:59.569 10:06:12 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:59.569 10:06:12 -- common/autotest_common.sh@62 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:10:59.569 10:06:12 -- common/autotest_common.sh@64 -- # : 00:10:59.569 10:06:12 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:10:59.569 10:06:12 -- common/autotest_common.sh@66 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:10:59.569 10:06:12 -- common/autotest_common.sh@68 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:10:59.569 10:06:12 -- common/autotest_common.sh@70 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:10:59.569 10:06:12 -- common/autotest_common.sh@72 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:59.569 10:06:12 -- common/autotest_common.sh@74 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:10:59.569 10:06:12 -- common/autotest_common.sh@76 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:10:59.569 10:06:12 -- common/autotest_common.sh@78 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:10:59.569 10:06:12 -- common/autotest_common.sh@80 -- # : 1 00:10:59.569 10:06:12 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:10:59.569 10:06:12 -- common/autotest_common.sh@82 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:10:59.569 10:06:12 -- common/autotest_common.sh@84 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:10:59.569 10:06:12 -- common/autotest_common.sh@86 -- # : 1 00:10:59.569 10:06:12 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:10:59.569 10:06:12 -- common/autotest_common.sh@88 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:10:59.569 10:06:12 -- common/autotest_common.sh@90 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:59.569 10:06:12 -- common/autotest_common.sh@92 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:10:59.569 10:06:12 -- common/autotest_common.sh@94 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:10:59.569 10:06:12 -- common/autotest_common.sh@96 -- # : tcp 00:10:59.569 10:06:12 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:59.569 10:06:12 -- common/autotest_common.sh@98 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:10:59.569 10:06:12 -- common/autotest_common.sh@100 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:10:59.569 10:06:12 -- common/autotest_common.sh@102 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:10:59.569 10:06:12 -- common/autotest_common.sh@104 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:10:59.569 
10:06:12 -- common/autotest_common.sh@106 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:10:59.569 10:06:12 -- common/autotest_common.sh@108 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:10:59.569 10:06:12 -- common/autotest_common.sh@110 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:10:59.569 10:06:12 -- common/autotest_common.sh@112 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:59.569 10:06:12 -- common/autotest_common.sh@114 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:10:59.569 10:06:12 -- common/autotest_common.sh@116 -- # : 1 00:10:59.569 10:06:12 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:10:59.569 10:06:12 -- common/autotest_common.sh@118 -- # : 00:10:59.569 10:06:12 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:59.569 10:06:12 -- common/autotest_common.sh@120 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:10:59.569 10:06:12 -- common/autotest_common.sh@122 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:10:59.569 10:06:12 -- common/autotest_common.sh@124 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:10:59.569 10:06:12 -- common/autotest_common.sh@126 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:10:59.569 10:06:12 -- common/autotest_common.sh@128 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:10:59.569 10:06:12 -- common/autotest_common.sh@130 -- # : 0 00:10:59.569 10:06:12 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:10:59.569 10:06:12 -- common/autotest_common.sh@132 -- # : 00:10:59.569 10:06:12 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:10:59.570 10:06:12 -- common/autotest_common.sh@134 -- # : true 00:10:59.570 10:06:12 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:10:59.570 10:06:12 -- common/autotest_common.sh@136 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:10:59.570 10:06:12 -- common/autotest_common.sh@138 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:10:59.570 10:06:12 -- common/autotest_common.sh@140 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:10:59.570 10:06:12 -- common/autotest_common.sh@142 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:10:59.570 10:06:12 -- common/autotest_common.sh@144 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:10:59.570 10:06:12 -- common/autotest_common.sh@146 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:10:59.570 10:06:12 -- common/autotest_common.sh@148 -- # : e810 00:10:59.570 10:06:12 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:10:59.570 10:06:12 -- common/autotest_common.sh@150 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:10:59.570 10:06:12 -- common/autotest_common.sh@152 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:10:59.570 10:06:12 -- common/autotest_common.sh@154 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:10:59.570 10:06:12 -- common/autotest_common.sh@156 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:10:59.570 10:06:12 -- common/autotest_common.sh@158 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:10:59.570 10:06:12 -- common/autotest_common.sh@160 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:10:59.570 10:06:12 -- common/autotest_common.sh@163 -- # : 00:10:59.570 10:06:12 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:10:59.570 10:06:12 -- common/autotest_common.sh@165 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:10:59.570 10:06:12 -- common/autotest_common.sh@167 -- # : 0 00:10:59.570 10:06:12 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:59.570 10:06:12 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.570 10:06:12 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:59.570 10:06:12 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:59.570 10:06:12 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:59.570 10:06:12 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:59.570 10:06:12 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:59.570 10:06:12 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:10:59.570 10:06:12 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:59.570 10:06:12 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:59.570 10:06:12 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:59.570 10:06:12 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:59.570 10:06:12 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:59.570 10:06:12 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:10:59.570 10:06:12 -- common/autotest_common.sh@196 -- # cat 00:10:59.570 10:06:12 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:10:59.570 10:06:12 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:59.570 10:06:12 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:59.570 10:06:12 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:59.570 10:06:12 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:59.570 10:06:12 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:10:59.570 10:06:12 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:10:59.570 10:06:12 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:59.570 10:06:12 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:59.570 10:06:12 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:59.570 10:06:12 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:59.570 10:06:12 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:59.570 10:06:12 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:59.570 10:06:12 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:59.570 10:06:12 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:59.570 10:06:12 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:59.570 10:06:12 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:59.570 10:06:12 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:59.570 10:06:12 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:59.570 10:06:12 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:10:59.570 10:06:12 -- common/autotest_common.sh@249 -- # export valgrind= 00:10:59.570 10:06:12 -- common/autotest_common.sh@249 -- # valgrind= 00:10:59.570 10:06:12 -- common/autotest_common.sh@255 -- # uname -s 00:10:59.570 10:06:12 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:10:59.570 10:06:12 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:10:59.570 10:06:12 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:10:59.570 10:06:12 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:10:59.570 10:06:12 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:10:59.570 10:06:12 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:10:59.570 10:06:12 -- common/autotest_common.sh@265 -- # MAKE=make 00:10:59.570 10:06:12 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j96 00:10:59.570 10:06:12 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:10:59.570 10:06:12 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:10:59.570 10:06:12 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:59.570 10:06:12 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:10:59.570 10:06:12 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:10:59.570 10:06:12 -- common/autotest_common.sh@291 -- # for i in "$@" 00:10:59.570 10:06:12 -- common/autotest_common.sh@292 -- # case "$i" in 00:10:59.570 10:06:12 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:10:59.570 10:06:12 -- common/autotest_common.sh@309 -- # [[ -z 173114 ]] 00:10:59.570 10:06:12 -- common/autotest_common.sh@309 -- # 
kill -0 173114 00:10:59.570 10:06:12 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:10:59.570 10:06:12 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:10:59.570 10:06:12 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:10:59.570 10:06:12 -- common/autotest_common.sh@322 -- # local mount target_dir 00:10:59.570 10:06:12 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:10:59.570 10:06:12 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:10:59.570 10:06:12 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:10:59.570 10:06:12 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:10:59.570 10:06:12 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.oGEAZ1 00:10:59.570 10:06:12 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:59.570 10:06:12 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:10:59.570 10:06:12 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:10:59.570 10:06:12 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.oGEAZ1/tests/target /tmp/spdk.oGEAZ1 00:10:59.570 10:06:12 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@318 -- # df -T 00:10:59.571 10:06:12 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=996753408 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=4287676416 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=186814234624 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=195974328320 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=9160093696 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=97981456384 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=97987162112 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=5705728 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=39185489920 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=39194865664 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=9375744 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=97986641920 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987166208 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=524288 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # avails["$mount"]=19597426688 00:10:59.571 10:06:12 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19597430784 00:10:59.571 10:06:12 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:10:59.571 10:06:12 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:10:59.571 10:06:12 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:10:59.571 * Looking for test storage... 
00:10:59.571 10:06:12 -- common/autotest_common.sh@359 -- # local target_space new_size 00:10:59.571 10:06:12 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:10:59.571 10:06:12 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.571 10:06:12 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:59.571 10:06:12 -- common/autotest_common.sh@363 -- # mount=/ 00:10:59.571 10:06:12 -- common/autotest_common.sh@365 -- # target_space=186814234624 00:10:59.571 10:06:12 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:10:59.571 10:06:12 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:10:59.571 10:06:12 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:10:59.571 10:06:12 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:10:59.571 10:06:12 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:10:59.571 10:06:12 -- common/autotest_common.sh@372 -- # new_size=11374686208 00:10:59.571 10:06:12 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:59.571 10:06:12 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.571 10:06:12 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.571 10:06:12 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.571 10:06:12 -- common/autotest_common.sh@380 -- # return 0 00:10:59.571 10:06:12 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:10:59.571 10:06:12 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:10:59.571 10:06:12 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:59.571 10:06:12 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:59.571 10:06:12 -- common/autotest_common.sh@1672 -- # true 00:10:59.571 10:06:12 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:10:59.571 10:06:12 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:59.571 10:06:12 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:59.571 10:06:12 -- common/autotest_common.sh@27 -- # exec 00:10:59.571 10:06:12 -- common/autotest_common.sh@29 -- # exec 00:10:59.571 10:06:12 -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:59.571 10:06:12 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:59.571 10:06:12 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:59.571 10:06:12 -- common/autotest_common.sh@18 -- # set -x 00:10:59.571 10:06:12 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.571 10:06:12 -- nvmf/common.sh@7 -- # uname -s 00:10:59.571 10:06:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.571 10:06:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.571 10:06:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.571 10:06:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.571 10:06:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.571 10:06:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.571 10:06:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.571 10:06:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.571 10:06:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.571 10:06:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.571 10:06:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:59.571 10:06:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:59.571 10:06:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.571 10:06:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.571 10:06:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.571 10:06:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.571 10:06:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.571 10:06:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.571 10:06:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.571 10:06:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.571 10:06:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.571 10:06:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.571 10:06:12 -- paths/export.sh@5 -- # export PATH 00:10:59.571 10:06:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.571 10:06:12 -- nvmf/common.sh@46 -- # : 0 00:10:59.571 10:06:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:59.571 10:06:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:59.571 10:06:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:59.571 10:06:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.571 10:06:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.571 10:06:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:59.571 10:06:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:59.571 10:06:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:59.571 10:06:12 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:59.571 10:06:12 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:59.571 10:06:12 -- target/filesystem.sh@15 -- # nvmftestinit 00:10:59.571 10:06:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:59.571 10:06:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.571 10:06:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:59.572 10:06:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:59.572 10:06:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:59.572 10:06:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.572 10:06:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.572 10:06:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.572 10:06:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:59.572 10:06:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:59.572 10:06:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:59.572 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:11:04.843 10:06:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:04.843 10:06:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:04.843 10:06:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:04.843 10:06:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:04.843 10:06:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:04.843 10:06:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:04.843 10:06:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:04.843 10:06:17 -- 
nvmf/common.sh@294 -- # net_devs=() 00:11:04.843 10:06:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:04.843 10:06:17 -- nvmf/common.sh@295 -- # e810=() 00:11:04.843 10:06:17 -- nvmf/common.sh@295 -- # local -ga e810 00:11:04.843 10:06:17 -- nvmf/common.sh@296 -- # x722=() 00:11:04.843 10:06:17 -- nvmf/common.sh@296 -- # local -ga x722 00:11:04.843 10:06:17 -- nvmf/common.sh@297 -- # mlx=() 00:11:04.843 10:06:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:04.843 10:06:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.843 10:06:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:04.843 10:06:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:04.843 10:06:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:04.843 10:06:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:04.843 10:06:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:04.843 10:06:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:04.844 10:06:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:04.844 10:06:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.844 10:06:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:04.844 10:06:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:04.844 10:06:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:04.844 10:06:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:04.844 10:06:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.844 10:06:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:04.844 10:06:17 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.844 10:06:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.844 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.844 10:06:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.844 10:06:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:04.844 10:06:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.844 10:06:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:04.844 10:06:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.844 10:06:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.844 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.844 10:06:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.844 10:06:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:04.844 10:06:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:04.844 10:06:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:04.844 10:06:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.844 10:06:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.844 10:06:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.844 10:06:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:04.844 10:06:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.844 10:06:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.844 10:06:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:04.844 10:06:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.844 10:06:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.844 10:06:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:04.844 10:06:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:04.844 10:06:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.844 10:06:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.844 10:06:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.844 10:06:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.844 10:06:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:04.844 10:06:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.844 10:06:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.844 10:06:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.844 10:06:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:04.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:11:04.844 00:11:04.844 --- 10.0.0.2 ping statistics --- 00:11:04.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.844 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:11:04.844 10:06:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:04.844 00:11:04.844 --- 10.0.0.1 ping statistics --- 00:11:04.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.844 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:04.844 10:06:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.844 10:06:17 -- nvmf/common.sh@410 -- # return 0 00:11:04.844 10:06:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:04.844 10:06:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.844 10:06:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:04.844 10:06:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.844 10:06:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:04.844 10:06:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:04.844 10:06:17 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:04.844 10:06:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:04.844 10:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.844 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 ************************************ 00:11:04.844 START TEST nvmf_filesystem_no_in_capsule 00:11:04.844 ************************************ 00:11:04.844 10:06:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:11:04.844 10:06:17 -- target/filesystem.sh@47 -- # in_capsule=0 00:11:04.844 10:06:17 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:04.844 10:06:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:04.844 10:06:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:04.844 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 10:06:17 -- nvmf/common.sh@469 -- # nvmfpid=176139 00:11:04.844 10:06:17 -- nvmf/common.sh@470 -- # waitforlisten 176139 00:11:04.844 10:06:17 -- common/autotest_common.sh@819 -- # '[' -z 176139 ']' 00:11:04.844 10:06:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.844 10:06:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:04.844 10:06:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.844 10:06:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:04.844 10:06:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.844 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 [2024-04-24 10:06:17.937037] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
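For reference, the nvmf_tcp_init sequence the trace just walked through, flattened into bare commands; interface names, addresses, and the firewall rule are taken verbatim from the log:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability

Putting one port of the two-port NIC into its own namespace is what lets a single host act as both NVMe/TCP target and initiator over a real wire, which both pings verify before the target app starts.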
00:11:04.844 [2024-04-24 10:06:17.937090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.844 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.844 [2024-04-24 10:06:17.993692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.844 [2024-04-24 10:06:18.072792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:04.844 [2024-04-24 10:06:18.072899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.844 [2024-04-24 10:06:18.072907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.844 [2024-04-24 10:06:18.072914] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.844 [2024-04-24 10:06:18.072954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.844 [2024-04-24 10:06:18.073053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.844 [2024-04-24 10:06:18.073074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.844 [2024-04-24 10:06:18.073076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.826 10:06:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:05.826 10:06:18 -- common/autotest_common.sh@852 -- # return 0 00:11:05.826 10:06:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:05.826 10:06:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 10:06:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.826 10:06:18 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:05.826 10:06:18 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:05.826 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 [2024-04-24 10:06:18.788450] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.826 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.826 10:06:18 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:05.826 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 Malloc1 00:11:05.826 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.826 10:06:18 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.826 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.826 10:06:18 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.826 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.826 10:06:18 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
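rpc_cmd in the traces above is the test library's wrapper around scripts/rpc.py, so the subsystem bring-up is equivalent to the following sequence (arguments copied from the trace; the wrapper supplies the socket handling):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP, 8 KiB I/O unit, in-capsule data disabled
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

-a marks the subsystem allow-any-host, and the serial number is what waitforserial greps for on the initiator side after nvme connect.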
00:11:05.826 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 [2024-04-24 10:06:18.937731] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.826 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.826 10:06:18 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:05.826 10:06:18 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:11:05.826 10:06:18 -- common/autotest_common.sh@1358 -- # local bdev_info 00:11:05.826 10:06:18 -- common/autotest_common.sh@1359 -- # local bs 00:11:05.826 10:06:18 -- common/autotest_common.sh@1360 -- # local nb 00:11:05.826 10:06:18 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:05.826 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.826 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.826 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.826 10:06:18 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:11:05.826 { 00:11:05.826 "name": "Malloc1", 00:11:05.826 "aliases": [ 00:11:05.826 "f6b76ad0-dae0-4bde-b41e-12fbc31fdc38" 00:11:05.826 ], 00:11:05.826 "product_name": "Malloc disk", 00:11:05.826 "block_size": 512, 00:11:05.826 "num_blocks": 1048576, 00:11:05.826 "uuid": "f6b76ad0-dae0-4bde-b41e-12fbc31fdc38", 00:11:05.826 "assigned_rate_limits": { 00:11:05.826 "rw_ios_per_sec": 0, 00:11:05.826 "rw_mbytes_per_sec": 0, 00:11:05.826 "r_mbytes_per_sec": 0, 00:11:05.826 "w_mbytes_per_sec": 0 00:11:05.826 }, 00:11:05.826 "claimed": true, 00:11:05.827 "claim_type": "exclusive_write", 00:11:05.827 "zoned": false, 00:11:05.827 "supported_io_types": { 00:11:05.827 "read": true, 00:11:05.827 "write": true, 00:11:05.827 "unmap": true, 00:11:05.827 "write_zeroes": true, 00:11:05.827 "flush": true, 00:11:05.827 "reset": true, 00:11:05.827 "compare": false, 00:11:05.827 "compare_and_write": false, 00:11:05.827 "abort": true, 00:11:05.827 "nvme_admin": false, 00:11:05.827 "nvme_io": false 00:11:05.827 }, 00:11:05.827 "memory_domains": [ 00:11:05.827 { 00:11:05.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.827 "dma_device_type": 2 00:11:05.827 } 00:11:05.827 ], 00:11:05.827 "driver_specific": {} 00:11:05.827 } 00:11:05.827 ]' 00:11:05.827 10:06:18 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:11:05.827 10:06:19 -- common/autotest_common.sh@1362 -- # bs=512 00:11:05.827 10:06:19 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:11:05.827 10:06:19 -- common/autotest_common.sh@1363 -- # nb=1048576 00:11:05.827 10:06:19 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:11:05.827 10:06:19 -- common/autotest_common.sh@1367 -- # echo 512 00:11:05.827 10:06:19 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:05.827 10:06:19 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.201 10:06:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.201 10:06:20 -- common/autotest_common.sh@1177 -- # local i=0 00:11:07.201 10:06:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.201 10:06:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:07.201 10:06:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:09.103 10:06:22 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:09.103 10:06:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:09.103 10:06:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.103 10:06:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:09.103 10:06:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.103 10:06:22 -- common/autotest_common.sh@1187 -- # return 0 00:11:09.103 10:06:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:09.103 10:06:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:09.103 10:06:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:09.103 10:06:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:09.103 10:06:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:09.103 10:06:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:09.103 10:06:22 -- setup/common.sh@80 -- # echo 536870912 00:11:09.103 10:06:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:09.103 10:06:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:09.103 10:06:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:09.103 10:06:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:09.361 10:06:22 -- target/filesystem.sh@69 -- # partprobe 00:11:09.361 10:06:22 -- target/filesystem.sh@70 -- # sleep 1 00:11:10.736 10:06:23 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:10.736 10:06:23 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:10.736 10:06:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:10.736 10:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.736 10:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:10.736 ************************************ 00:11:10.736 START TEST filesystem_ext4 00:11:10.736 ************************************ 00:11:10.736 10:06:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:10.736 10:06:23 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:10.736 10:06:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.736 10:06:23 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:10.736 10:06:23 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:11:10.736 10:06:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:11:10.736 10:06:23 -- common/autotest_common.sh@904 -- # local i=0 00:11:10.736 10:06:23 -- common/autotest_common.sh@905 -- # local force 00:11:10.736 10:06:23 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:11:10.736 10:06:23 -- common/autotest_common.sh@908 -- # force=-F 00:11:10.736 10:06:23 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:10.736 mke2fs 1.46.5 (30-Dec-2021) 00:11:10.736 Discarding device blocks: 0/522240 done 00:11:10.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:10.736 Filesystem UUID: aaffd2df-d331-4eac-8859-1fb6c9bba99f 00:11:10.736 Superblock backups stored on blocks: 00:11:10.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:10.736 00:11:10.736 Allocating group tables: 0/64 done 00:11:10.736 Writing inode tables: 0/64 done 00:11:10.736 Creating journal (8192 blocks): done 00:11:10.736 Writing superblocks and filesystem accounting information: 0/64 done 00:11:10.736 00:11:10.736 10:06:23 -- 
common/autotest_common.sh@921 -- # return 0 00:11:10.736 10:06:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.994 10:06:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.994 10:06:24 -- target/filesystem.sh@25 -- # sync 00:11:10.994 10:06:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.994 10:06:24 -- target/filesystem.sh@27 -- # sync 00:11:10.994 10:06:24 -- target/filesystem.sh@29 -- # i=0 00:11:10.994 10:06:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.994 10:06:24 -- target/filesystem.sh@37 -- # kill -0 176139 00:11:10.994 10:06:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.994 10:06:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.994 10:06:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.994 10:06:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.994 00:11:10.994 real 0m0.670s 00:11:10.994 user 0m0.017s 00:11:10.994 sys 0m0.071s 00:11:10.994 10:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.994 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:11:10.994 ************************************ 00:11:10.994 END TEST filesystem_ext4 00:11:10.994 ************************************ 00:11:11.253 10:06:24 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:11.253 10:06:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:11.253 10:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.253 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:11:11.253 ************************************ 00:11:11.253 START TEST filesystem_btrfs 00:11:11.253 ************************************ 00:11:11.253 10:06:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:11.253 10:06:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:11.253 10:06:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.253 10:06:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:11.253 10:06:24 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:11:11.253 10:06:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:11:11.253 10:06:24 -- common/autotest_common.sh@904 -- # local i=0 00:11:11.253 10:06:24 -- common/autotest_common.sh@905 -- # local force 00:11:11.253 10:06:24 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:11:11.253 10:06:24 -- common/autotest_common.sh@910 -- # force=-f 00:11:11.253 10:06:24 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:11.253 btrfs-progs v6.6.2 00:11:11.253 See https://btrfs.readthedocs.io for more information. 00:11:11.253 00:11:11.253 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:11.253 NOTE: several default settings have changed in version 5.15, please make sure 00:11:11.253 this does not affect your deployments: 00:11:11.253 - DUP for metadata (-m dup) 00:11:11.253 - enabled no-holes (-O no-holes) 00:11:11.253 - enabled free-space-tree (-R free-space-tree) 00:11:11.253 00:11:11.253 Label: (null) 00:11:11.253 UUID: 2ab87f3c-d7f5-44da-9b7e-0452e9fc9061 00:11:11.253 Node size: 16384 00:11:11.253 Sector size: 4096 00:11:11.253 Filesystem size: 510.00MiB 00:11:11.253 Block group profiles: 00:11:11.253 Data: single 8.00MiB 00:11:11.253 Metadata: DUP 32.00MiB 00:11:11.253 System: DUP 8.00MiB 00:11:11.253 SSD detected: yes 00:11:11.253 Zoned device: no 00:11:11.253 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:11.253 Runtime features: free-space-tree 00:11:11.253 Checksum: crc32c 00:11:11.253 Number of devices: 1 00:11:11.253 Devices: 00:11:11.253 ID SIZE PATH 00:11:11.253 1 510.00MiB /dev/nvme0n1p1 00:11:11.253 00:11:11.253 10:06:24 -- common/autotest_common.sh@921 -- # return 0 00:11:11.253 10:06:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.188 10:06:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.188 10:06:25 -- target/filesystem.sh@25 -- # sync 00:11:12.188 10:06:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.188 10:06:25 -- target/filesystem.sh@27 -- # sync 00:11:12.188 10:06:25 -- target/filesystem.sh@29 -- # i=0 00:11:12.188 10:06:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.188 10:06:25 -- target/filesystem.sh@37 -- # kill -0 176139 00:11:12.188 10:06:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.447 10:06:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.447 10:06:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.447 10:06:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.447 00:11:12.447 real 0m1.189s 00:11:12.447 user 0m0.028s 00:11:12.447 sys 0m0.121s 00:11:12.447 10:06:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.447 10:06:25 -- common/autotest_common.sh@10 -- # set +x 00:11:12.448 ************************************ 00:11:12.448 END TEST filesystem_btrfs 00:11:12.448 ************************************ 00:11:12.448 10:06:25 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:12.448 10:06:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:12.448 10:06:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:12.448 10:06:25 -- common/autotest_common.sh@10 -- # set +x 00:11:12.448 ************************************ 00:11:12.448 START TEST filesystem_xfs 00:11:12.448 ************************************ 00:11:12.448 10:06:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:11:12.448 10:06:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:12.448 10:06:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.448 10:06:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:12.448 10:06:25 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:11:12.448 10:06:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:11:12.448 10:06:25 -- common/autotest_common.sh@904 -- # local i=0 00:11:12.448 10:06:25 -- common/autotest_common.sh@905 -- # local force 00:11:12.448 10:06:25 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:11:12.448 10:06:25 -- common/autotest_common.sh@910 -- # force=-f 00:11:12.448 10:06:25 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:12.448 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:12.448 = sectsz=512 attr=2, projid32bit=1 00:11:12.448 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:12.448 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:12.448 data = bsize=4096 blocks=130560, imaxpct=25 00:11:12.448 = sunit=0 swidth=0 blks 00:11:12.448 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:12.448 log =internal log bsize=4096 blocks=16384, version=2 00:11:12.448 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:12.448 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.384 Discarding blocks...Done. 00:11:13.384 10:06:26 -- common/autotest_common.sh@921 -- # return 0 00:11:13.384 10:06:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.916 10:06:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.916 10:06:28 -- target/filesystem.sh@25 -- # sync 00:11:15.916 10:06:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.916 10:06:28 -- target/filesystem.sh@27 -- # sync 00:11:15.916 10:06:28 -- target/filesystem.sh@29 -- # i=0 00:11:15.916 10:06:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.916 10:06:28 -- target/filesystem.sh@37 -- # kill -0 176139 00:11:15.916 10:06:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.916 10:06:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.916 10:06:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.916 10:06:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.916 00:11:15.916 real 0m3.304s 00:11:15.916 user 0m0.014s 00:11:15.916 sys 0m0.079s 00:11:15.916 10:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.916 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 ************************************ 00:11:15.916 END TEST filesystem_xfs 00:11:15.916 ************************************ 00:11:15.916 10:06:28 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:15.916 10:06:28 -- target/filesystem.sh@93 -- # sync 00:11:15.916 10:06:28 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.916 10:06:29 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.916 10:06:29 -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.916 10:06:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:15.916 10:06:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.916 10:06:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:15.917 10:06:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.917 10:06:29 -- common/autotest_common.sh@1210 -- # return 0 00:11:15.917 10:06:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.917 10:06:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.917 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 10:06:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.917 10:06:29 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.917 10:06:29 -- target/filesystem.sh@101 -- # killprocess 176139 00:11:15.917 10:06:29 -- common/autotest_common.sh@926 -- # '[' -z 176139 ']' 00:11:15.917 10:06:29 -- common/autotest_common.sh@930 -- # kill -0 176139 00:11:15.917 10:06:29 -- 
common/autotest_common.sh@931 -- # uname 00:11:15.917 10:06:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:15.917 10:06:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 176139 00:11:15.917 10:06:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:15.917 10:06:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:15.917 10:06:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 176139' 00:11:15.917 killing process with pid 176139 00:11:15.917 10:06:29 -- common/autotest_common.sh@945 -- # kill 176139 00:11:15.917 10:06:29 -- common/autotest_common.sh@950 -- # wait 176139 00:11:16.484 10:06:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:16.484 00:11:16.484 real 0m11.616s 00:11:16.484 user 0m45.534s 00:11:16.484 sys 0m1.109s 00:11:16.484 10:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.484 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.484 ************************************ 00:11:16.484 END TEST nvmf_filesystem_no_in_capsule 00:11:16.484 ************************************ 00:11:16.484 10:06:29 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:16.484 10:06:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:16.484 10:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.484 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.484 ************************************ 00:11:16.484 START TEST nvmf_filesystem_in_capsule 00:11:16.484 ************************************ 00:11:16.484 10:06:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:11:16.484 10:06:29 -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:16.484 10:06:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:16.484 10:06:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.484 10:06:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:16.484 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.484 10:06:29 -- nvmf/common.sh@469 -- # nvmfpid=178343 00:11:16.484 10:06:29 -- nvmf/common.sh@470 -- # waitforlisten 178343 00:11:16.484 10:06:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.484 10:06:29 -- common/autotest_common.sh@819 -- # '[' -z 178343 ']' 00:11:16.484 10:06:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.484 10:06:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:16.484 10:06:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.485 10:06:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:16.485 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.485 [2024-04-24 10:06:29.597397] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
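Between the two passes everything is torn down; flattened from the trace (pid 176139 as logged):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition under a device lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach the initiator
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 176139 && wait 176139                        # killprocess: signal the target, then reap it

waitforserial_disconnect sits between the disconnect and the subsystem delete, polling lsblk -o NAME,SERIAL until SPDKISFASTANDAWESOME no longer appears.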
00:11:16.485 [2024-04-24 10:06:29.597443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.485 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.485 [2024-04-24 10:06:29.655620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.485 [2024-04-24 10:06:29.728923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:16.485 [2024-04-24 10:06:29.729039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.485 [2024-04-24 10:06:29.729047] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.485 [2024-04-24 10:06:29.729053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.485 [2024-04-24 10:06:29.729109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.485 [2024-04-24 10:06:29.729155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.485 [2024-04-24 10:06:29.729239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.485 [2024-04-24 10:06:29.729241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.419 10:06:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:17.419 10:06:30 -- common/autotest_common.sh@852 -- # return 0 00:11:17.419 10:06:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:17.419 10:06:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 10:06:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.419 10:06:30 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:17.419 10:06:30 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:17.419 10:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 [2024-04-24 10:06:30.443397] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.419 10:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.419 10:06:30 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:17.419 10:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 Malloc1 00:11:17.419 10:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.419 10:06:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.419 10:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 10:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.419 10:06:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.419 10:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 10:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.419 10:06:30 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
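The second pass is the same fixture with in-capsule data enabled; only the -c argument to the transport changes. Roughly how filesystem.sh parameterizes it, reconstructed from the @47-@56 and @105-@106 traces (body abbreviated to the lines the traces show):

    nvmf_filesystem_part() {
        local in_capsule=$1
        nvmfappstart -m 0xF
        rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"
        rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc1
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s "$NVMF_SERIAL"
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
        # connect, partition, and run the per-filesystem sub-tests ...
    }

    run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0     # first pass, -c 0
    run_test nvmf_filesystem_in_capsule    nvmf_filesystem_part 4096  # this pass, -c 4096

With -c 4096, host writes up to 4 KiB can carry their data inside the NVMe/TCP command capsule instead of waiting for a separate ready-to-transfer exchange, which is the behavioral difference the two passes cover.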
00:11:17.419 10:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 [2024-04-24 10:06:30.588249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.419 10:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.419 10:06:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:17.419 10:06:30 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:11:17.419 10:06:30 -- common/autotest_common.sh@1358 -- # local bdev_info 00:11:17.419 10:06:30 -- common/autotest_common.sh@1359 -- # local bs 00:11:17.419 10:06:30 -- common/autotest_common.sh@1360 -- # local nb 00:11:17.419 10:06:30 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:17.419 10:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:17.419 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 10:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:17.419 10:06:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:11:17.419 { 00:11:17.419 "name": "Malloc1", 00:11:17.419 "aliases": [ 00:11:17.419 "e4493da2-916a-4a94-8b47-f4c8d420575e" 00:11:17.419 ], 00:11:17.419 "product_name": "Malloc disk", 00:11:17.419 "block_size": 512, 00:11:17.419 "num_blocks": 1048576, 00:11:17.419 "uuid": "e4493da2-916a-4a94-8b47-f4c8d420575e", 00:11:17.419 "assigned_rate_limits": { 00:11:17.419 "rw_ios_per_sec": 0, 00:11:17.419 "rw_mbytes_per_sec": 0, 00:11:17.419 "r_mbytes_per_sec": 0, 00:11:17.419 "w_mbytes_per_sec": 0 00:11:17.419 }, 00:11:17.419 "claimed": true, 00:11:17.419 "claim_type": "exclusive_write", 00:11:17.419 "zoned": false, 00:11:17.419 "supported_io_types": { 00:11:17.419 "read": true, 00:11:17.419 "write": true, 00:11:17.419 "unmap": true, 00:11:17.419 "write_zeroes": true, 00:11:17.419 "flush": true, 00:11:17.419 "reset": true, 00:11:17.419 "compare": false, 00:11:17.419 "compare_and_write": false, 00:11:17.419 "abort": true, 00:11:17.419 "nvme_admin": false, 00:11:17.419 "nvme_io": false 00:11:17.419 }, 00:11:17.419 "memory_domains": [ 00:11:17.419 { 00:11:17.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.419 "dma_device_type": 2 00:11:17.419 } 00:11:17.419 ], 00:11:17.419 "driver_specific": {} 00:11:17.419 } 00:11:17.419 ]' 00:11:17.419 10:06:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:11:17.419 10:06:30 -- common/autotest_common.sh@1362 -- # bs=512 00:11:17.419 10:06:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:11:17.677 10:06:30 -- common/autotest_common.sh@1363 -- # nb=1048576 00:11:17.677 10:06:30 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:11:17.677 10:06:30 -- common/autotest_common.sh@1367 -- # echo 512 00:11:17.677 10:06:30 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:17.677 10:06:30 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.610 10:06:31 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.610 10:06:31 -- common/autotest_common.sh@1177 -- # local i=0 00:11:18.610 10:06:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.611 10:06:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:18.611 10:06:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:21.142 10:06:33 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:21.142 10:06:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:21.142 10:06:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.142 10:06:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:21.142 10:06:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.142 10:06:33 -- common/autotest_common.sh@1187 -- # return 0 00:11:21.142 10:06:33 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:21.142 10:06:33 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:21.142 10:06:33 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:21.142 10:06:33 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:21.142 10:06:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:21.142 10:06:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:21.142 10:06:33 -- setup/common.sh@80 -- # echo 536870912 00:11:21.142 10:06:33 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:21.142 10:06:33 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:21.142 10:06:33 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:21.142 10:06:33 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:21.142 10:06:33 -- target/filesystem.sh@69 -- # partprobe 00:11:21.400 10:06:34 -- target/filesystem.sh@70 -- # sleep 1 00:11:22.775 10:06:35 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:22.775 10:06:35 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:22.775 10:06:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:22.775 10:06:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.775 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:11:22.775 ************************************ 00:11:22.775 START TEST filesystem_in_capsule_ext4 00:11:22.775 ************************************ 00:11:22.775 10:06:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:22.775 10:06:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:22.775 10:06:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.775 10:06:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:22.775 10:06:35 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:11:22.775 10:06:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:11:22.775 10:06:35 -- common/autotest_common.sh@904 -- # local i=0 00:11:22.775 10:06:35 -- common/autotest_common.sh@905 -- # local force 00:11:22.775 10:06:35 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:11:22.775 10:06:35 -- common/autotest_common.sh@908 -- # force=-F 00:11:22.775 10:06:35 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:22.775 mke2fs 1.46.5 (30-Dec-2021) 00:11:22.775 Discarding device blocks: 0/522240 done 00:11:22.775 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:22.775 Filesystem UUID: 9960a991-2b60-4e92-929b-1f260b4661db 00:11:22.775 Superblock backups stored on blocks: 00:11:22.775 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:22.775 00:11:22.775 Allocating group tables: 0/64 done 00:11:22.775 Writing inode tables: 0/64 done 00:11:23.341 Creating journal (8192 blocks): done 00:11:24.166 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:24.166 00:11:24.166 
10:06:37 -- common/autotest_common.sh@921 -- # return 0 00:11:24.166 10:06:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.102 10:06:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.102 10:06:38 -- target/filesystem.sh@25 -- # sync 00:11:25.102 10:06:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.102 10:06:38 -- target/filesystem.sh@27 -- # sync 00:11:25.102 10:06:38 -- target/filesystem.sh@29 -- # i=0 00:11:25.102 10:06:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.102 10:06:38 -- target/filesystem.sh@37 -- # kill -0 178343 00:11:25.102 10:06:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.102 10:06:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.102 10:06:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.102 10:06:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.102 00:11:25.102 real 0m2.513s 00:11:25.102 user 0m0.028s 00:11:25.102 sys 0m0.062s 00:11:25.102 10:06:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.102 10:06:38 -- common/autotest_common.sh@10 -- # set +x 00:11:25.102 ************************************ 00:11:25.102 END TEST filesystem_in_capsule_ext4 00:11:25.102 ************************************ 00:11:25.102 10:06:38 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:25.102 10:06:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:25.102 10:06:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:25.102 10:06:38 -- common/autotest_common.sh@10 -- # set +x 00:11:25.102 ************************************ 00:11:25.102 START TEST filesystem_in_capsule_btrfs 00:11:25.102 ************************************ 00:11:25.102 10:06:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:25.102 10:06:38 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:25.102 10:06:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.102 10:06:38 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:25.102 10:06:38 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:11:25.102 10:06:38 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:11:25.102 10:06:38 -- common/autotest_common.sh@904 -- # local i=0 00:11:25.102 10:06:38 -- common/autotest_common.sh@905 -- # local force 00:11:25.102 10:06:38 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:11:25.102 10:06:38 -- common/autotest_common.sh@910 -- # force=-f 00:11:25.102 10:06:38 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:25.359 btrfs-progs v6.6.2 00:11:25.359 See https://btrfs.readthedocs.io for more information. 00:11:25.359 00:11:25.359 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:25.359 NOTE: several default settings have changed in version 5.15, please make sure 00:11:25.359 this does not affect your deployments: 00:11:25.359 - DUP for metadata (-m dup) 00:11:25.359 - enabled no-holes (-O no-holes) 00:11:25.359 - enabled free-space-tree (-R free-space-tree) 00:11:25.359 00:11:25.359 Label: (null) 00:11:25.359 UUID: 77d54cd4-d745-40c2-83e0-fcd263096168 00:11:25.359 Node size: 16384 00:11:25.359 Sector size: 4096 00:11:25.359 Filesystem size: 510.00MiB 00:11:25.359 Block group profiles: 00:11:25.359 Data: single 8.00MiB 00:11:25.359 Metadata: DUP 32.00MiB 00:11:25.359 System: DUP 8.00MiB 00:11:25.359 SSD detected: yes 00:11:25.359 Zoned device: no 00:11:25.359 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:25.359 Runtime features: free-space-tree 00:11:25.359 Checksum: crc32c 00:11:25.359 Number of devices: 1 00:11:25.359 Devices: 00:11:25.359 ID SIZE PATH 00:11:25.359 1 510.00MiB /dev/nvme0n1p1 00:11:25.359 00:11:25.359 10:06:38 -- common/autotest_common.sh@921 -- # return 0 00:11:25.359 10:06:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.293 10:06:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.293 10:06:39 -- target/filesystem.sh@25 -- # sync 00:11:26.293 10:06:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.293 10:06:39 -- target/filesystem.sh@27 -- # sync 00:11:26.293 10:06:39 -- target/filesystem.sh@29 -- # i=0 00:11:26.293 10:06:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.293 10:06:39 -- target/filesystem.sh@37 -- # kill -0 178343 00:11:26.293 10:06:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.293 10:06:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.293 10:06:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.293 10:06:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.293 00:11:26.293 real 0m1.184s 00:11:26.293 user 0m0.022s 00:11:26.293 sys 0m0.127s 00:11:26.293 10:06:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.293 10:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 ************************************ 00:11:26.293 END TEST filesystem_in_capsule_btrfs 00:11:26.293 ************************************ 00:11:26.293 10:06:39 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:26.293 10:06:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:26.293 10:06:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.293 10:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 ************************************ 00:11:26.293 START TEST filesystem_in_capsule_xfs 00:11:26.293 ************************************ 00:11:26.293 10:06:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:11:26.293 10:06:39 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:26.293 10:06:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.293 10:06:39 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:26.293 10:06:39 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:11:26.293 10:06:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:11:26.293 10:06:39 -- common/autotest_common.sh@904 -- # local i=0 00:11:26.293 10:06:39 -- common/autotest_common.sh@905 -- # local force 00:11:26.293 10:06:39 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:11:26.293 10:06:39 -- common/autotest_common.sh@910 -- # force=-f 
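Before the xfs pass below, note the smoke test that target/filesystem.sh (@23 through @43 in the trace) just ran against the ext4 and btrfs partitions and repeats for xfs: mount the namespace over NVMe/TCP, do a small write/delete cycle, unmount, then confirm the device and partition are still visible. Condensed as a hedged sketch; the umount retry bound is an assumption, since the trace only shows i=0 before the umount, and $nvmfpid stands in for the literal target PID 178343:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # write through the NVMe/TCP path
    sync
    rm /mnt/device/aaa
    sync
    i=0
    until umount /mnt/device; do               # retry while the mount is briefly busy
        (( ++i > 10 )) && exit 1
        sleep 1
    done
    kill -0 "$nvmfpid"                         # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # controller still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present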
00:11:26.293 10:06:39 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:26.293 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:26.293 = sectsz=512 attr=2, projid32bit=1 00:11:26.293 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:26.293 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:26.293 data = bsize=4096 blocks=130560, imaxpct=25 00:11:26.293 = sunit=0 swidth=0 blks 00:11:26.293 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:26.293 log =internal log bsize=4096 blocks=16384, version=2 00:11:26.293 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:26.293 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:27.668 Discarding blocks...Done. 00:11:27.668 10:06:40 -- common/autotest_common.sh@921 -- # return 0 00:11:27.668 10:06:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.196 10:06:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.196 10:06:43 -- target/filesystem.sh@25 -- # sync 00:11:30.196 10:06:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.196 10:06:43 -- target/filesystem.sh@27 -- # sync 00:11:30.196 10:06:43 -- target/filesystem.sh@29 -- # i=0 00:11:30.196 10:06:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.196 10:06:43 -- target/filesystem.sh@37 -- # kill -0 178343 00:11:30.196 10:06:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.196 10:06:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.196 10:06:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.196 10:06:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.196 00:11:30.196 real 0m3.709s 00:11:30.196 user 0m0.020s 00:11:30.196 sys 0m0.075s 00:11:30.196 10:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.196 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:11:30.197 ************************************ 00:11:30.197 END TEST filesystem_in_capsule_xfs 00:11:30.197 ************************************ 00:11:30.197 10:06:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:30.197 10:06:43 -- target/filesystem.sh@93 -- # sync 00:11:30.197 10:06:43 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.197 10:06:43 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.197 10:06:43 -- common/autotest_common.sh@1198 -- # local i=0 00:11:30.197 10:06:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:30.197 10:06:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.197 10:06:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:30.197 10:06:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.197 10:06:43 -- common/autotest_common.sh@1210 -- # return 0 00:11:30.197 10:06:43 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.197 10:06:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.197 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:11:30.197 10:06:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.197 10:06:43 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:30.197 10:06:43 -- target/filesystem.sh@101 -- # killprocess 178343 00:11:30.197 10:06:43 -- common/autotest_common.sh@926 -- # '[' -z 178343 ']' 00:11:30.197 10:06:43 -- common/autotest_common.sh@930 -- # kill -0 178343 
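The disconnect path mirrors the connect path: after nvme disconnect, waitforserial_disconnect polls lsblk until no block device reports the subsystem serial, just as the @1185 to @1187 loop at the top of this section polled until one appeared. A hedged reconstruction from the trace; the loop bound of 15 is borrowed from the connect-side counter, the disconnect-side bound is not visible:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # the exported namespace advertises its serial (SPDKISFASTANDAWESOME) via lsblk
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ >= 15 )) && return 1
            sleep 1
        done
        return 0
    }

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME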
00:11:30.197 10:06:43 -- common/autotest_common.sh@931 -- # uname 00:11:30.197 10:06:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.197 10:06:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 178343 00:11:30.197 10:06:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.197 10:06:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.197 10:06:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 178343' 00:11:30.197 killing process with pid 178343 00:11:30.197 10:06:43 -- common/autotest_common.sh@945 -- # kill 178343 00:11:30.197 10:06:43 -- common/autotest_common.sh@950 -- # wait 178343 00:11:30.766 10:06:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:30.766 00:11:30.766 real 0m14.233s 00:11:30.766 user 0m55.910s 00:11:30.766 sys 0m1.185s 00:11:30.766 10:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.766 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:11:30.766 ************************************ 00:11:30.766 END TEST nvmf_filesystem_in_capsule 00:11:30.766 ************************************ 00:11:30.766 10:06:43 -- target/filesystem.sh@108 -- # nvmftestfini 00:11:30.766 10:06:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:30.766 10:06:43 -- nvmf/common.sh@116 -- # sync 00:11:30.766 10:06:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:30.766 10:06:43 -- nvmf/common.sh@119 -- # set +e 00:11:30.766 10:06:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:30.766 10:06:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:30.766 rmmod nvme_tcp 00:11:30.766 rmmod nvme_fabrics 00:11:30.766 rmmod nvme_keyring 00:11:30.766 10:06:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:30.766 10:06:43 -- nvmf/common.sh@123 -- # set -e 00:11:30.766 10:06:43 -- nvmf/common.sh@124 -- # return 0 00:11:30.766 10:06:43 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:11:30.766 10:06:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:30.766 10:06:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:30.766 10:06:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:30.766 10:06:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.766 10:06:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:30.766 10:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.766 10:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.766 10:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.694 10:06:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:32.694 00:11:32.694 real 0m33.334s 00:11:32.694 user 1m42.872s 00:11:32.694 sys 0m6.299s 00:11:32.694 10:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.694 10:06:45 -- common/autotest_common.sh@10 -- # set +x 00:11:32.694 ************************************ 00:11:32.694 END TEST nvmf_filesystem 00:11:32.694 ************************************ 00:11:32.694 10:06:45 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:32.694 10:06:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:32.694 10:06:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.694 10:06:45 -- common/autotest_common.sh@10 -- # set +x 00:11:32.953 ************************************ 00:11:32.953 START TEST nvmf_discovery 00:11:32.953 ************************************ 00:11:32.953 10:06:45 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:32.953 * Looking for test storage... 00:11:32.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.953 10:06:46 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.953 10:06:46 -- nvmf/common.sh@7 -- # uname -s 00:11:32.953 10:06:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.953 10:06:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.953 10:06:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.953 10:06:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.953 10:06:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.953 10:06:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.953 10:06:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.953 10:06:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.953 10:06:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.953 10:06:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.953 10:06:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.953 10:06:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.953 10:06:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.953 10:06:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.953 10:06:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.953 10:06:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.953 10:06:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.953 10:06:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.953 10:06:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.953 10:06:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.953 10:06:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.953 10:06:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.953 10:06:46 -- paths/export.sh@5 -- # export PATH 00:11:32.953 10:06:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.953 10:06:46 -- nvmf/common.sh@46 -- # : 0 00:11:32.953 10:06:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:32.953 10:06:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:32.953 10:06:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:32.953 10:06:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.953 10:06:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.953 10:06:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:32.953 10:06:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:32.953 10:06:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:32.953 10:06:46 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:32.953 10:06:46 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:32.953 10:06:46 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:32.953 10:06:46 -- target/discovery.sh@15 -- # hash nvme 00:11:32.953 10:06:46 -- target/discovery.sh@20 -- # nvmftestinit 00:11:32.953 10:06:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:32.953 10:06:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.953 10:06:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:32.953 10:06:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:32.953 10:06:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:32.953 10:06:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.953 10:06:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.953 10:06:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.953 10:06:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:32.953 10:06:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:32.953 10:06:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:32.953 10:06:46 -- common/autotest_common.sh@10 -- # set +x 00:11:38.226 10:06:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:38.226 10:06:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:38.226 10:06:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:38.226 10:06:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:38.226 10:06:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:38.226 10:06:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:38.226 10:06:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:38.226 10:06:51 -- 
nvmf/common.sh@294 -- # net_devs=() 00:11:38.226 10:06:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:38.226 10:06:51 -- nvmf/common.sh@295 -- # e810=() 00:11:38.226 10:06:51 -- nvmf/common.sh@295 -- # local -ga e810 00:11:38.226 10:06:51 -- nvmf/common.sh@296 -- # x722=() 00:11:38.226 10:06:51 -- nvmf/common.sh@296 -- # local -ga x722 00:11:38.226 10:06:51 -- nvmf/common.sh@297 -- # mlx=() 00:11:38.226 10:06:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:38.226 10:06:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.226 10:06:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:38.226 10:06:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:38.226 10:06:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:38.226 10:06:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:38.226 10:06:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:38.226 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:38.226 10:06:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:38.226 10:06:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:38.226 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:38.226 10:06:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:38.226 10:06:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:38.226 10:06:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.226 10:06:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:38.226 10:06:51 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.226 10:06:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:38.226 Found net devices under 0000:86:00.0: cvl_0_0 00:11:38.226 10:06:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.226 10:06:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:38.226 10:06:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.226 10:06:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:38.226 10:06:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.226 10:06:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:38.226 Found net devices under 0000:86:00.1: cvl_0_1 00:11:38.226 10:06:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.226 10:06:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:38.226 10:06:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:38.226 10:06:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:38.226 10:06:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.226 10:06:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.226 10:06:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.226 10:06:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:38.226 10:06:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.226 10:06:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.226 10:06:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:38.226 10:06:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.226 10:06:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.226 10:06:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:38.226 10:06:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:38.226 10:06:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.226 10:06:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.226 10:06:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.226 10:06:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.226 10:06:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:38.226 10:06:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.226 10:06:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.226 10:06:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.226 10:06:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:38.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:11:38.226 00:11:38.226 --- 10.0.0.2 ping statistics --- 00:11:38.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.226 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:38.226 10:06:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:11:38.226 00:11:38.226 --- 10.0.0.1 ping statistics --- 00:11:38.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.226 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:11:38.226 10:06:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.226 10:06:51 -- nvmf/common.sh@410 -- # return 0 00:11:38.226 10:06:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:38.226 10:06:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.226 10:06:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:38.226 10:06:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.226 10:06:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:38.226 10:06:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:38.226 10:06:51 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:38.226 10:06:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:38.226 10:06:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:38.226 10:06:51 -- common/autotest_common.sh@10 -- # set +x 00:11:38.226 10:06:51 -- nvmf/common.sh@469 -- # nvmfpid=184330 00:11:38.226 10:06:51 -- nvmf/common.sh@470 -- # waitforlisten 184330 00:11:38.226 10:06:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.226 10:06:51 -- common/autotest_common.sh@819 -- # '[' -z 184330 ']' 00:11:38.226 10:06:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.226 10:06:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:38.226 10:06:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.226 10:06:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:38.227 10:06:51 -- common/autotest_common.sh@10 -- # set +x 00:11:38.227 [2024-04-24 10:06:51.409848] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:38.227 [2024-04-24 10:06:51.409892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.227 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.227 [2024-04-24 10:06:51.467140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.492 [2024-04-24 10:06:51.548151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.492 [2024-04-24 10:06:51.548257] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.492 [2024-04-24 10:06:51.548266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.492 [2024-04-24 10:06:51.548274] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
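Those two pings close out nvmf_tcp_init: the physical NIC pair is split across a network namespace so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over real wire. Condensed from the trace above, with the address-flush and cleanup steps omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps its NIC
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator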
00:11:38.492 [2024-04-24 10:06:51.548315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.492 [2024-04-24 10:06:51.548335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.492 [2024-04-24 10:06:51.548423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.492 [2024-04-24 10:06:51.548425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.104 10:06:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:39.104 10:06:52 -- common/autotest_common.sh@852 -- # return 0 00:11:39.104 10:06:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:39.104 10:06:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 10:06:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.104 10:06:52 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.104 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 [2024-04-24 10:06:52.259364] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.104 10:06:52 -- target/discovery.sh@26 -- # seq 1 4 00:11:39.104 10:06:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.104 10:06:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:39.104 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 Null1 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.104 10:06:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.104 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.104 10:06:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:39.104 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.104 10:06:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.104 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 [2024-04-24 10:06:52.304791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.104 10:06:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.104 10:06:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:39.104 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 Null2 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.104 10:06:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:39.104 10:06:52 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.104 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.104 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.105 10:06:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 Null3 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.105 10:06:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.105 Null4 00:11:39.105 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.105 10:06:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:39.105 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.105 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.362 10:06:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:39.362 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.362 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.362 10:06:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:39.362 
10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.362 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.362 10:06:52 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.362 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.362 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.362 10:06:52 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:39.362 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.362 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.362 10:06:52 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:39.362 00:11:39.362 Discovery Log Number of Records 6, Generation counter 6 00:11:39.362 =====Discovery Log Entry 0====== 00:11:39.362 trtype: tcp 00:11:39.362 adrfam: ipv4 00:11:39.362 subtype: current discovery subsystem 00:11:39.362 treq: not required 00:11:39.362 portid: 0 00:11:39.362 trsvcid: 4420 00:11:39.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:39.362 traddr: 10.0.0.2 00:11:39.362 eflags: explicit discovery connections, duplicate discovery information 00:11:39.362 sectype: none 00:11:39.362 =====Discovery Log Entry 1====== 00:11:39.362 trtype: tcp 00:11:39.362 adrfam: ipv4 00:11:39.362 subtype: nvme subsystem 00:11:39.362 treq: not required 00:11:39.362 portid: 0 00:11:39.362 trsvcid: 4420 00:11:39.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:39.362 traddr: 10.0.0.2 00:11:39.362 eflags: none 00:11:39.362 sectype: none 00:11:39.362 =====Discovery Log Entry 2====== 00:11:39.362 trtype: tcp 00:11:39.362 adrfam: ipv4 00:11:39.362 subtype: nvme subsystem 00:11:39.362 treq: not required 00:11:39.362 portid: 0 00:11:39.362 trsvcid: 4420 00:11:39.362 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:39.362 traddr: 10.0.0.2 00:11:39.362 eflags: none 00:11:39.362 sectype: none 00:11:39.362 =====Discovery Log Entry 3====== 00:11:39.362 trtype: tcp 00:11:39.362 adrfam: ipv4 00:11:39.362 subtype: nvme subsystem 00:11:39.362 treq: not required 00:11:39.362 portid: 0 00:11:39.362 trsvcid: 4420 00:11:39.362 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:39.362 traddr: 10.0.0.2 00:11:39.362 eflags: none 00:11:39.362 sectype: none 00:11:39.362 =====Discovery Log Entry 4====== 00:11:39.362 trtype: tcp 00:11:39.362 adrfam: ipv4 00:11:39.362 subtype: nvme subsystem 00:11:39.362 treq: not required 00:11:39.362 portid: 0 00:11:39.362 trsvcid: 4420 00:11:39.362 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:39.362 traddr: 10.0.0.2 00:11:39.362 eflags: none 00:11:39.362 sectype: none 00:11:39.362 =====Discovery Log Entry 5====== 00:11:39.362 trtype: tcp 00:11:39.362 adrfam: ipv4 00:11:39.362 subtype: discovery subsystem referral 00:11:39.362 treq: not required 00:11:39.362 portid: 0 00:11:39.362 trsvcid: 4430 00:11:39.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:39.362 traddr: 10.0.0.2 00:11:39.362 eflags: none 00:11:39.362 sectype: none 00:11:39.362 10:06:52 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:39.362 Perform nvmf subsystem discovery via RPC 00:11:39.362 10:06:52 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:39.362 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.362 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 [2024-04-24 10:06:52.593679] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:39.362 [ 00:11:39.362 { 00:11:39.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:39.362 "subtype": "Discovery", 00:11:39.362 "listen_addresses": [ 00:11:39.362 { 00:11:39.362 "transport": "TCP", 00:11:39.362 "trtype": "TCP", 00:11:39.362 "adrfam": "IPv4", 00:11:39.362 "traddr": "10.0.0.2", 00:11:39.362 "trsvcid": "4420" 00:11:39.362 } 00:11:39.362 ], 00:11:39.362 "allow_any_host": true, 00:11:39.362 "hosts": [] 00:11:39.362 }, 00:11:39.362 { 00:11:39.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.362 "subtype": "NVMe", 00:11:39.362 "listen_addresses": [ 00:11:39.362 { 00:11:39.362 "transport": "TCP", 00:11:39.362 "trtype": "TCP", 00:11:39.362 "adrfam": "IPv4", 00:11:39.362 "traddr": "10.0.0.2", 00:11:39.362 "trsvcid": "4420" 00:11:39.362 } 00:11:39.362 ], 00:11:39.362 "allow_any_host": true, 00:11:39.362 "hosts": [], 00:11:39.362 "serial_number": "SPDK00000000000001", 00:11:39.362 "model_number": "SPDK bdev Controller", 00:11:39.362 "max_namespaces": 32, 00:11:39.362 "min_cntlid": 1, 00:11:39.362 "max_cntlid": 65519, 00:11:39.362 "namespaces": [ 00:11:39.362 { 00:11:39.362 "nsid": 1, 00:11:39.362 "bdev_name": "Null1", 00:11:39.362 "name": "Null1", 00:11:39.362 "nguid": "94A4308A69054469B7EE4BFC2554288B", 00:11:39.362 "uuid": "94a4308a-6905-4469-b7ee-4bfc2554288b" 00:11:39.362 } 00:11:39.362 ] 00:11:39.362 }, 00:11:39.362 { 00:11:39.362 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:39.362 "subtype": "NVMe", 00:11:39.362 "listen_addresses": [ 00:11:39.362 { 00:11:39.362 "transport": "TCP", 00:11:39.362 "trtype": "TCP", 00:11:39.362 "adrfam": "IPv4", 00:11:39.362 "traddr": "10.0.0.2", 00:11:39.362 "trsvcid": "4420" 00:11:39.362 } 00:11:39.363 ], 00:11:39.363 "allow_any_host": true, 00:11:39.363 "hosts": [], 00:11:39.363 "serial_number": "SPDK00000000000002", 00:11:39.363 "model_number": "SPDK bdev Controller", 00:11:39.363 "max_namespaces": 32, 00:11:39.363 "min_cntlid": 1, 00:11:39.363 "max_cntlid": 65519, 00:11:39.363 "namespaces": [ 00:11:39.363 { 00:11:39.363 "nsid": 1, 00:11:39.363 "bdev_name": "Null2", 00:11:39.363 "name": "Null2", 00:11:39.363 "nguid": "E9BA6F29978243E7BF2D661018E2FF07", 00:11:39.363 "uuid": "e9ba6f29-9782-43e7-bf2d-661018e2ff07" 00:11:39.363 } 00:11:39.363 ] 00:11:39.363 }, 00:11:39.363 { 00:11:39.363 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:39.363 "subtype": "NVMe", 00:11:39.363 "listen_addresses": [ 00:11:39.363 { 00:11:39.363 "transport": "TCP", 00:11:39.363 "trtype": "TCP", 00:11:39.363 "adrfam": "IPv4", 00:11:39.363 "traddr": "10.0.0.2", 00:11:39.363 "trsvcid": "4420" 00:11:39.363 } 00:11:39.363 ], 00:11:39.363 "allow_any_host": true, 00:11:39.363 "hosts": [], 00:11:39.363 "serial_number": "SPDK00000000000003", 00:11:39.363 "model_number": "SPDK bdev Controller", 00:11:39.363 "max_namespaces": 32, 00:11:39.363 "min_cntlid": 1, 00:11:39.363 "max_cntlid": 65519, 00:11:39.363 "namespaces": [ 00:11:39.363 { 00:11:39.363 "nsid": 1, 00:11:39.363 "bdev_name": "Null3", 00:11:39.363 "name": "Null3", 00:11:39.363 "nguid": "37335E0564D04D85BDE46EC00FC7AE1F", 00:11:39.363 "uuid": "37335e05-64d0-4d85-bde4-6ec00fc7ae1f" 00:11:39.363 } 00:11:39.363 ] 
00:11:39.363 }, 00:11:39.363 { 00:11:39.363 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:39.363 "subtype": "NVMe", 00:11:39.363 "listen_addresses": [ 00:11:39.363 { 00:11:39.363 "transport": "TCP", 00:11:39.363 "trtype": "TCP", 00:11:39.363 "adrfam": "IPv4", 00:11:39.363 "traddr": "10.0.0.2", 00:11:39.363 "trsvcid": "4420" 00:11:39.363 } 00:11:39.363 ], 00:11:39.363 "allow_any_host": true, 00:11:39.363 "hosts": [], 00:11:39.363 "serial_number": "SPDK00000000000004", 00:11:39.363 "model_number": "SPDK bdev Controller", 00:11:39.363 "max_namespaces": 32, 00:11:39.363 "min_cntlid": 1, 00:11:39.363 "max_cntlid": 65519, 00:11:39.363 "namespaces": [ 00:11:39.363 { 00:11:39.363 "nsid": 1, 00:11:39.363 "bdev_name": "Null4", 00:11:39.363 "name": "Null4", 00:11:39.363 "nguid": "47A7187BEAD84B579482091E13701E19", 00:11:39.363 "uuid": "47a7187b-ead8-4b57-9482-091e13701e19" 00:11:39.363 } 00:11:39.363 ] 00:11:39.363 } 00:11:39.363 ] 00:11:39.363 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.363 10:06:52 -- target/discovery.sh@42 -- # seq 1 4 00:11:39.363 10:06:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.363 10:06:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.363 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.363 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.363 10:06:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:39.363 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.363 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.363 10:06:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.363 10:06:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:39.363 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.363 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.624 10:06:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.624 10:06:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
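The teardown underway here unwinds the setup loop from earlier in discovery.sh, and both sides are symmetric. Condensed from the trace (102400 and 512 are the NULL_BDEV_SIZE and NULL_BLOCK_SIZE constants declared above):

    for i in $(seq 1 4); do    # setup: one null bdev + subsystem + listener each
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    for i in $(seq 1 4); do    # teardown: delete the subsystem first, then its backing bdev
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc_cmd bdev_null_delete Null$i
    done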
00:11:39.624 10:06:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:39.624 10:06:52 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:39.624 10:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.624 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:39.624 10:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.624 10:06:52 -- target/discovery.sh@49 -- # check_bdevs= 00:11:39.624 10:06:52 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:39.624 10:06:52 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:39.624 10:06:52 -- target/discovery.sh@57 -- # nvmftestfini 00:11:39.624 10:06:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:39.624 10:06:52 -- nvmf/common.sh@116 -- # sync 00:11:39.624 10:06:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:39.624 10:06:52 -- nvmf/common.sh@119 -- # set +e 00:11:39.624 10:06:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:39.624 10:06:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:39.624 rmmod nvme_tcp 00:11:39.624 rmmod nvme_fabrics 00:11:39.624 rmmod nvme_keyring 00:11:39.624 10:06:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:39.624 10:06:52 -- nvmf/common.sh@123 -- # set -e 00:11:39.624 10:06:52 -- nvmf/common.sh@124 -- # return 0 00:11:39.624 10:06:52 -- nvmf/common.sh@477 -- # '[' -n 184330 ']' 00:11:39.624 10:06:52 -- nvmf/common.sh@478 -- # killprocess 184330 00:11:39.624 10:06:52 -- common/autotest_common.sh@926 -- # '[' -z 184330 ']' 00:11:39.624 10:06:52 -- common/autotest_common.sh@930 -- # kill -0 184330 00:11:39.624 10:06:52 -- common/autotest_common.sh@931 -- # uname 00:11:39.624 10:06:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:39.624 10:06:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 184330 00:11:39.624 10:06:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:39.624 10:06:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:39.624 10:06:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 184330' 00:11:39.624 killing process with pid 184330 00:11:39.624 10:06:52 -- common/autotest_common.sh@945 -- # kill 184330 00:11:39.624 [2024-04-24 10:06:52.839667] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:39.624 10:06:52 -- common/autotest_common.sh@950 -- # wait 184330 00:11:39.883 10:06:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:39.883 10:06:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:39.883 10:06:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:39.883 10:06:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.883 10:06:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:39.883 10:06:53 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.883 10:06:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.883 10:06:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.420 10:06:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:42.420 00:11:42.420 real 0m9.138s 00:11:42.420 user 0m7.583s 00:11:42.420 sys 0m4.299s 00:11:42.420 10:06:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.420 10:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.420 ************************************ 00:11:42.420 END TEST nvmf_discovery 00:11:42.420 ************************************ 00:11:42.420 10:06:55 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:42.420 10:06:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:42.420 10:06:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.420 10:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:42.420 ************************************ 00:11:42.420 START TEST nvmf_referrals 00:11:42.420 ************************************ 00:11:42.420 10:06:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:42.420 * Looking for test storage... 00:11:42.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.420 10:06:55 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.420 10:06:55 -- nvmf/common.sh@7 -- # uname -s 00:11:42.420 10:06:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.420 10:06:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.420 10:06:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.420 10:06:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.420 10:06:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.420 10:06:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.420 10:06:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.420 10:06:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.420 10:06:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.420 10:06:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.420 10:06:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:42.420 10:06:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:42.420 10:06:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.420 10:06:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.420 10:06:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.420 10:06:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.420 10:06:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.420 10:06:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.420 10:06:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.420 10:06:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.420 10:06:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.420 10:06:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.420 10:06:55 -- paths/export.sh@5 -- # export PATH 00:11:42.420 10:06:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.420 10:06:55 -- nvmf/common.sh@46 -- # : 0 00:11:42.420 10:06:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:42.420 10:06:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:42.420 10:06:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:42.420 10:06:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.420 10:06:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.420 10:06:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:42.420 10:06:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:42.420 10:06:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:42.420 10:06:55 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:42.420 10:06:55 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:42.420 10:06:55 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:42.420 10:06:55 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:42.420 10:06:55 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:42.420 10:06:55 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:42.420 10:06:55 -- target/referrals.sh@37 -- # nvmftestinit 00:11:42.420 10:06:55 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:11:42.420 10:06:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.420 10:06:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:42.420 10:06:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:42.420 10:06:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:42.420 10:06:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.420 10:06:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.420 10:06:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.420 10:06:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:42.420 10:06:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:42.420 10:06:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:42.420 10:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:47.692 10:07:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:47.692 10:07:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:47.692 10:07:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:47.692 10:07:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:47.692 10:07:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:47.692 10:07:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:47.692 10:07:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:47.692 10:07:00 -- nvmf/common.sh@294 -- # net_devs=() 00:11:47.692 10:07:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:47.692 10:07:00 -- nvmf/common.sh@295 -- # e810=() 00:11:47.692 10:07:00 -- nvmf/common.sh@295 -- # local -ga e810 00:11:47.692 10:07:00 -- nvmf/common.sh@296 -- # x722=() 00:11:47.692 10:07:00 -- nvmf/common.sh@296 -- # local -ga x722 00:11:47.692 10:07:00 -- nvmf/common.sh@297 -- # mlx=() 00:11:47.692 10:07:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:47.692 10:07:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.692 10:07:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:47.692 10:07:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:47.692 10:07:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:47.692 10:07:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:47.692 10:07:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:47.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:47.692 10:07:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:47.692 10:07:00 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:47.692 10:07:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:47.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:47.692 10:07:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:47.692 10:07:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:47.692 10:07:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.692 10:07:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:47.692 10:07:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.692 10:07:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:47.692 Found net devices under 0000:86:00.0: cvl_0_0 00:11:47.692 10:07:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.692 10:07:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:47.692 10:07:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.692 10:07:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:47.692 10:07:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.692 10:07:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:47.692 Found net devices under 0000:86:00.1: cvl_0_1 00:11:47.692 10:07:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.692 10:07:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:47.692 10:07:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:47.692 10:07:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:47.692 10:07:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.692 10:07:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.692 10:07:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.692 10:07:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:47.692 10:07:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.692 10:07:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.692 10:07:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:47.692 10:07:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.692 10:07:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.692 10:07:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:47.692 10:07:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:47.692 10:07:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.692 10:07:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:11:47.692 10:07:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.692 10:07:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.692 10:07:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:47.692 10:07:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.692 10:07:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.692 10:07:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.692 10:07:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:47.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:11:47.692 00:11:47.692 --- 10.0.0.2 ping statistics --- 00:11:47.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.692 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:47.692 10:07:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:11:47.692 00:11:47.692 --- 10.0.0.1 ping statistics --- 00:11:47.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.692 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:47.692 10:07:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.692 10:07:00 -- nvmf/common.sh@410 -- # return 0 00:11:47.692 10:07:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:47.692 10:07:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.692 10:07:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:47.692 10:07:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.692 10:07:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:47.692 10:07:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:47.692 10:07:00 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:47.692 10:07:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:47.692 10:07:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:47.692 10:07:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.692 10:07:00 -- nvmf/common.sh@469 -- # nvmfpid=188133 00:11:47.692 10:07:00 -- nvmf/common.sh@470 -- # waitforlisten 188133 00:11:47.692 10:07:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.693 10:07:00 -- common/autotest_common.sh@819 -- # '[' -z 188133 ']' 00:11:47.693 10:07:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.693 10:07:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:47.693 10:07:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.693 10:07:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:47.693 10:07:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.693 [2024-04-24 10:07:00.684176] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
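For reference, the nvmf_tcp_init sequence traced above reduces to the following; a condensed sketch, assuming this rig's cvl_0_0/cvl_0_1 port names and the 10.0.0.0/24 test addressing:

    # Wire the NIC's two ports back to back through a network namespace:
    # cvl_0_0 becomes the target side (10.0.0.2) inside cvl_0_0_ns_spdk,
    # cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the initiator interface, then verify
    # reachability in both directions (the two pings logged above).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1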
00:11:47.693 [2024-04-24 10:07:00.684226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.693 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.693 [2024-04-24 10:07:00.742754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.693 [2024-04-24 10:07:00.823399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.693 [2024-04-24 10:07:00.823508] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.693 [2024-04-24 10:07:00.823516] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.693 [2024-04-24 10:07:00.823523] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.693 [2024-04-24 10:07:00.823562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.693 [2024-04-24 10:07:00.823655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.693 [2024-04-24 10:07:00.823739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.693 [2024-04-24 10:07:00.823741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.260 10:07:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.260 10:07:01 -- common/autotest_common.sh@852 -- # return 0 00:11:48.260 10:07:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.260 10:07:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:48.260 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.260 10:07:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.260 10:07:01 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.260 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.260 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.260 [2024-04-24 10:07:01.536416] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.519 [2024-04-24 10:07:01.549813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.519 10:07:01 -- target/referrals.sh@48 -- # jq length 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:48.519 10:07:01 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:48.519 10:07:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:48.519 10:07:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.519 10:07:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:48.519 10:07:01 -- target/referrals.sh@21 -- # sort 00:11:48.519 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:48.519 10:07:01 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:48.519 10:07:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.519 10:07:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.519 10:07:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.519 10:07:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.519 10:07:01 -- target/referrals.sh@26 -- # sort 00:11:48.519 10:07:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:48.519 10:07:01 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:48.519 10:07:01 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:48.519 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.519 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:01 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:48.778 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.778 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:01 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:48.778 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.778 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:01 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.778 10:07:01 -- target/referrals.sh@56 -- # jq length 00:11:48.778 10:07:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.778 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:01 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:48.778 10:07:01 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:48.778 10:07:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.778 10:07:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.778 10:07:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.778 10:07:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.778 10:07:01 -- target/referrals.sh@26 -- # sort 00:11:48.778 10:07:01 -- target/referrals.sh@26 -- # echo 00:11:48.778 10:07:01 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:48.778 10:07:01 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:48.778 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.778 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:01 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:48.778 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.778 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:01 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:48.778 10:07:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:48.778 10:07:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.778 10:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.778 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:11:48.778 10:07:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:48.778 10:07:01 -- target/referrals.sh@21 -- # sort 00:11:48.778 10:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.778 10:07:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:48.778 10:07:02 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:48.778 10:07:02 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:48.778 10:07:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.778 10:07:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.778 10:07:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.778 10:07:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.778 10:07:02 -- target/referrals.sh@26 -- # sort 00:11:49.037 10:07:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:49.037 10:07:02 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:49.037 10:07:02 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:49.037 10:07:02 -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:49.037 10:07:02 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:49.037 10:07:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:49.037 10:07:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:49.037 10:07:02 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:49.037 10:07:02 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:49.037 10:07:02 -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:49.037 10:07:02 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:49.037 10:07:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:49.037 10:07:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:49.302 10:07:02 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:49.302 10:07:02 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:49.302 10:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.302 10:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:49.302 10:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.302 10:07:02 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:49.302 10:07:02 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:49.302 10:07:02 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:49.302 10:07:02 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:49.302 10:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.302 10:07:02 -- target/referrals.sh@21 -- # sort 00:11:49.302 10:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:49.302 10:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.302 10:07:02 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:49.302 10:07:02 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:49.302 10:07:02 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:49.302 10:07:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:49.302 10:07:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:49.302 10:07:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:49.302 10:07:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:49.302 10:07:02 -- target/referrals.sh@26 -- # sort 00:11:49.562 10:07:02 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:49.562 10:07:02 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:49.562 10:07:02 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:49.562 10:07:02 -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:49.562 10:07:02 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:49.562 10:07:02 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:49.562 10:07:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:49.562 10:07:02 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:49.562 10:07:02 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:49.562 10:07:02 -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:49.562 10:07:02 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:49.562 10:07:02 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:49.562 10:07:02 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:49.820 10:07:02 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:49.820 10:07:02 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:49.820 10:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.820 10:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:49.820 10:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.820 10:07:02 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:49.820 10:07:02 -- target/referrals.sh@82 -- # jq length 00:11:49.820 10:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:49.820 10:07:02 -- common/autotest_common.sh@10 -- # set +x 00:11:49.820 10:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:49.820 10:07:02 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:49.820 10:07:02 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:49.820 10:07:02 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:49.820 10:07:02 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:49.820 10:07:02 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:49.820 10:07:02 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:49.820 10:07:02 -- target/referrals.sh@26 -- # sort 00:11:49.820 10:07:03 -- target/referrals.sh@26 -- # echo 00:11:49.820 10:07:03 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:49.820 10:07:03 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:49.820 10:07:03 -- target/referrals.sh@86 -- # nvmftestfini 00:11:49.820 10:07:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:49.820 10:07:03 -- nvmf/common.sh@116 -- # sync 00:11:49.820 10:07:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:49.820 10:07:03 -- nvmf/common.sh@119 -- # set +e 00:11:49.820 10:07:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:49.820 10:07:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:49.820 rmmod nvme_tcp 00:11:49.820 rmmod nvme_fabrics 00:11:49.820 rmmod nvme_keyring 00:11:49.820 10:07:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:49.820 10:07:03 -- nvmf/common.sh@123 -- # set -e 00:11:50.079 10:07:03 -- nvmf/common.sh@124 -- # return 0 00:11:50.079 10:07:03 -- nvmf/common.sh@477 
-- # '[' -n 188133 ']' 00:11:50.079 10:07:03 -- nvmf/common.sh@478 -- # killprocess 188133 00:11:50.079 10:07:03 -- common/autotest_common.sh@926 -- # '[' -z 188133 ']' 00:11:50.079 10:07:03 -- common/autotest_common.sh@930 -- # kill -0 188133 00:11:50.079 10:07:03 -- common/autotest_common.sh@931 -- # uname 00:11:50.079 10:07:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:50.079 10:07:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 188133 00:11:50.079 10:07:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:50.079 10:07:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:50.079 10:07:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 188133' 00:11:50.079 killing process with pid 188133 00:11:50.079 10:07:03 -- common/autotest_common.sh@945 -- # kill 188133 00:11:50.079 10:07:03 -- common/autotest_common.sh@950 -- # wait 188133 00:11:50.079 10:07:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:50.079 10:07:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:50.079 10:07:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:50.079 10:07:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.079 10:07:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:50.079 10:07:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.079 10:07:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.079 10:07:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.617 10:07:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:52.617 00:11:52.617 real 0m10.258s 00:11:52.617 user 0m12.375s 00:11:52.617 sys 0m4.691s 00:11:52.617 10:07:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.617 10:07:05 -- common/autotest_common.sh@10 -- # set +x 00:11:52.617 ************************************ 00:11:52.617 END TEST nvmf_referrals 00:11:52.617 ************************************ 00:11:52.617 10:07:05 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:52.617 10:07:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:52.617 10:07:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.617 10:07:05 -- common/autotest_common.sh@10 -- # set +x 00:11:52.617 ************************************ 00:11:52.617 START TEST nvmf_connect_disconnect 00:11:52.617 ************************************ 00:11:52.617 10:07:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:52.617 * Looking for test storage... 
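Stepping back before the next test's output scrolls on: the nvmf_referrals run that just finished exercises the discovery-referral RPCs end to end. A rough reconstruction of its flow, with rpc.py standing in for the suite's rpc_cmd wrapper and the host NQN/ID options elided:

    # Register three referrals, check the target reports all of them,
    # then tear them down again (addresses and ports from the trace above).
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq length    # expect 3
    # The same referrals must also be visible to a host on the wire:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done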
00:11:52.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.617 10:07:05 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.617 10:07:05 -- nvmf/common.sh@7 -- # uname -s 00:11:52.617 10:07:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.617 10:07:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.617 10:07:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.617 10:07:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.617 10:07:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.617 10:07:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.617 10:07:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.617 10:07:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.617 10:07:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.617 10:07:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.617 10:07:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.617 10:07:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.617 10:07:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.617 10:07:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.617 10:07:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.617 10:07:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.617 10:07:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.617 10:07:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.617 10:07:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.617 10:07:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.617 10:07:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.617 10:07:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.617 10:07:05 -- paths/export.sh@5 -- # export PATH 00:11:52.617 10:07:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.617 10:07:05 -- nvmf/common.sh@46 -- # : 0 00:11:52.617 10:07:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:52.617 10:07:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:52.617 10:07:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:52.617 10:07:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.617 10:07:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.617 10:07:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:52.617 10:07:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:52.617 10:07:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:52.617 10:07:05 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.617 10:07:05 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.617 10:07:05 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:52.617 10:07:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:52.617 10:07:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.617 10:07:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:52.617 10:07:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:52.617 10:07:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:52.617 10:07:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.617 10:07:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.617 10:07:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.617 10:07:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:52.617 10:07:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:52.617 10:07:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:52.617 10:07:05 -- common/autotest_common.sh@10 -- # set +x 00:11:57.894 10:07:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:57.894 10:07:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:57.894 10:07:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:57.894 10:07:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:57.894 10:07:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:57.894 10:07:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:57.894 10:07:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:57.894 10:07:10 -- nvmf/common.sh@294 -- # net_devs=() 00:11:57.894 10:07:10 -- nvmf/common.sh@294 -- # local -ga net_devs 
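The gather_supported_nvmf_pci_devs pass that starts here is essentially a table lookup: known Intel (E810/X722) and Mellanox device IDs are matched against the PCI bus, and the kernel net devices behind each hit are read out of sysfs. A minimal sketch of the same idea for the one ID this box actually has, assuming lspci is available:

    # 0x8086:0x159b is the Intel E810 function found twice in this log
    # (0000:86:00.0 and 0000:86:00.1); list the netdevs behind each one.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue   # no netdev bound (e.g. driver unbound)
            echo "Found net device under $pci: ${dev##*/}"
        done
    done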
00:11:57.894 10:07:10 -- nvmf/common.sh@295 -- # e810=() 00:11:57.894 10:07:10 -- nvmf/common.sh@295 -- # local -ga e810 00:11:57.894 10:07:10 -- nvmf/common.sh@296 -- # x722=() 00:11:57.894 10:07:10 -- nvmf/common.sh@296 -- # local -ga x722 00:11:57.894 10:07:10 -- nvmf/common.sh@297 -- # mlx=() 00:11:57.894 10:07:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:57.894 10:07:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.894 10:07:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:57.894 10:07:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:57.894 10:07:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:57.894 10:07:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:57.894 10:07:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:57.894 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:57.894 10:07:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:57.894 10:07:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:57.894 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:57.894 10:07:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:57.894 10:07:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:57.894 10:07:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.894 10:07:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:57.894 10:07:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.894 10:07:10 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:11:57.894 Found net devices under 0000:86:00.0: cvl_0_0 00:11:57.894 10:07:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.894 10:07:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:57.894 10:07:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.894 10:07:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:57.894 10:07:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.894 10:07:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:57.894 Found net devices under 0000:86:00.1: cvl_0_1 00:11:57.894 10:07:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.894 10:07:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:57.894 10:07:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:57.894 10:07:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:57.894 10:07:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.894 10:07:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.894 10:07:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.894 10:07:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:57.894 10:07:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.894 10:07:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.894 10:07:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:57.894 10:07:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.894 10:07:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.894 10:07:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:57.894 10:07:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:57.894 10:07:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.894 10:07:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.894 10:07:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.894 10:07:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.894 10:07:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:57.894 10:07:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.894 10:07:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.894 10:07:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.894 10:07:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:57.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:11:57.894 00:11:57.894 --- 10.0.0.2 ping statistics --- 00:11:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.894 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:11:57.894 10:07:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:11:57.894 00:11:57.894 --- 10.0.0.1 ping statistics --- 00:11:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.894 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:11:57.894 10:07:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.894 10:07:10 -- nvmf/common.sh@410 -- # return 0 00:11:57.894 10:07:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:57.894 10:07:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.894 10:07:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:57.894 10:07:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.894 10:07:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:57.894 10:07:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:57.894 10:07:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:57.894 10:07:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:57.894 10:07:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:57.894 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.894 10:07:10 -- nvmf/common.sh@469 -- # nvmfpid=192017 00:11:57.894 10:07:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.894 10:07:10 -- nvmf/common.sh@470 -- # waitforlisten 192017 00:11:57.894 10:07:10 -- common/autotest_common.sh@819 -- # '[' -z 192017 ']' 00:11:57.894 10:07:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.894 10:07:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:57.894 10:07:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.894 10:07:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:57.894 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.894 [2024-04-24 10:07:10.878209] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:57.894 [2024-04-24 10:07:10.878255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.894 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.894 [2024-04-24 10:07:10.936602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.895 [2024-04-24 10:07:11.007499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:57.895 [2024-04-24 10:07:11.007633] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.895 [2024-04-24 10:07:11.007641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.895 [2024-04-24 10:07:11.007647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
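nvmfappstart, whose trace appears above, amounts to launching the target inside the target namespace and polling for its JSON-RPC socket before any rpc calls are issued. A sketch using the paths and flags from this log; the retry loop mirrors waitforlisten's max_retries=100 and is an approximation, not the helper's exact logic:

    # -i 0: shm id, -e 0xFFFF: tracepoint mask, -m 0xF: run on cores 0-3.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for i in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break   # RPC socket is up, target is ready
        kill -0 "$nvmfpid" || exit 1         # target died during startup
        sleep 0.5
    done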
00:11:57.895 [2024-04-24 10:07:11.007695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.895 [2024-04-24 10:07:11.007802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.895 [2024-04-24 10:07:11.007889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.895 [2024-04-24 10:07:11.007891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.461 10:07:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:58.462 10:07:11 -- common/autotest_common.sh@852 -- # return 0 00:11:58.462 10:07:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:58.462 10:07:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:58.462 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 10:07:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.462 10:07:11 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:58.462 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.462 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 [2024-04-24 10:07:11.720321] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.462 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.462 10:07:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:58.462 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.462 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.721 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.721 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.721 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.721 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.721 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.721 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.721 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.721 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.721 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:11:58.721 [2024-04-24 10:07:11.772220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.721 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:58.721 10:07:11 -- target/connect_disconnect.sh@34 -- # set +x 00:12:01.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:12:10.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.106 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:02.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.264 10:11:02 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
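Each 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' line above is one pass of the test's connect/disconnect loop. Reduced to its core: num_iterations=100 and the 'nvme connect -i 8' prefix come straight from the trace, while the loop body is an inferred reconstruction with the host NQN/ID options elided:

    NQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do
        # -i 8 caps the connection at 8 I/O queues (from NVME_CONNECT above)
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"
        # a real pass waits for the namespace block device before tearing down
        nvme disconnect -n "$NQN"   # prints the 'disconnected 1 controller(s)' line
    done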
00:15:49.264 10:11:02 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:49.264 10:11:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:49.264 10:11:02 -- nvmf/common.sh@116 -- # sync 00:15:49.264 10:11:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:49.264 10:11:02 -- nvmf/common.sh@119 -- # set +e 00:15:49.264 10:11:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:49.264 10:11:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:49.264 rmmod nvme_tcp 00:15:49.264 rmmod nvme_fabrics 00:15:49.264 rmmod nvme_keyring 00:15:49.264 10:11:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:49.264 10:11:02 -- nvmf/common.sh@123 -- # set -e 00:15:49.264 10:11:02 -- nvmf/common.sh@124 -- # return 0 00:15:49.264 10:11:02 -- nvmf/common.sh@477 -- # '[' -n 192017 ']' 00:15:49.264 10:11:02 -- nvmf/common.sh@478 -- # killprocess 192017 00:15:49.264 10:11:02 -- common/autotest_common.sh@926 -- # '[' -z 192017 ']' 00:15:49.264 10:11:02 -- common/autotest_common.sh@930 -- # kill -0 192017 00:15:49.264 10:11:02 -- common/autotest_common.sh@931 -- # uname 00:15:49.264 10:11:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:49.264 10:11:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 192017 00:15:49.264 10:11:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:49.264 10:11:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:49.264 10:11:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 192017' 00:15:49.264 killing process with pid 192017 00:15:49.264 10:11:02 -- common/autotest_common.sh@945 -- # kill 192017 00:15:49.264 10:11:02 -- common/autotest_common.sh@950 -- # wait 192017 00:15:49.264 10:11:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:49.264 10:11:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:49.264 10:11:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:49.264 10:11:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.264 10:11:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:49.264 10:11:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.264 10:11:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.264 10:11:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.170 10:11:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:51.429 00:15:51.429 real 3m58.996s 00:15:51.429 user 15m18.161s 00:15:51.429 sys 0m19.931s 00:15:51.429 10:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.429 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.429 ************************************ 00:15:51.429 END TEST nvmf_connect_disconnect 00:15:51.429 ************************************ 00:15:51.429 10:11:04 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:51.429 10:11:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:51.429 10:11:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:51.429 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:15:51.429 ************************************ 00:15:51.429 START TEST nvmf_multitarget 00:15:51.429 ************************************ 00:15:51.429 10:11:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:51.429 * Looking for test storage... 
00:15:51.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.429 10:11:04 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.429 10:11:04 -- nvmf/common.sh@7 -- # uname -s 00:15:51.429 10:11:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.429 10:11:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.429 10:11:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.429 10:11:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.429 10:11:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.429 10:11:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.429 10:11:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.429 10:11:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.429 10:11:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.429 10:11:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.429 10:11:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.429 10:11:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.429 10:11:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.429 10:11:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.429 10:11:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.429 10:11:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.429 10:11:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.429 10:11:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.429 10:11:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.429 10:11:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.430 10:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.430 10:11:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.430 10:11:04 -- paths/export.sh@5 -- # export PATH 00:15:51.430 10:11:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.430 10:11:04 -- nvmf/common.sh@46 -- # : 0 00:15:51.430 10:11:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.430 10:11:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.430 10:11:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.430 10:11:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.430 10:11:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.430 10:11:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:51.430 10:11:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.430 10:11:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.430 10:11:04 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:51.430 10:11:04 -- target/multitarget.sh@15 -- # nvmftestinit 00:15:51.430 10:11:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.430 10:11:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.430 10:11:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.430 10:11:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.430 10:11:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.430 10:11:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.430 10:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.430 10:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.430 10:11:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:51.430 10:11:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:51.430 10:11:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:51.430 10:11:04 -- common/autotest_common.sh@10 -- # set +x 00:15:56.702 10:11:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:56.702 10:11:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:56.702 10:11:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:56.702 10:11:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:56.702 10:11:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:56.702 10:11:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:56.702 10:11:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:56.702 10:11:09 -- nvmf/common.sh@294 -- # net_devs=() 00:15:56.702 10:11:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:56.702 10:11:09 -- 
nvmf/common.sh@295 -- # e810=() 00:15:56.702 10:11:09 -- nvmf/common.sh@295 -- # local -ga e810 00:15:56.702 10:11:09 -- nvmf/common.sh@296 -- # x722=() 00:15:56.702 10:11:09 -- nvmf/common.sh@296 -- # local -ga x722 00:15:56.702 10:11:09 -- nvmf/common.sh@297 -- # mlx=() 00:15:56.702 10:11:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:56.702 10:11:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.702 10:11:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:56.702 10:11:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:56.702 10:11:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:56.702 10:11:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:56.702 10:11:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:56.702 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:56.702 10:11:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:56.702 10:11:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:56.702 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:56.702 10:11:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:56.702 10:11:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:56.702 10:11:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.702 10:11:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:56.702 10:11:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.702 10:11:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
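Annotation: gather_supported_nvmf_pci_devs, traced above, whitelists NIC device IDs per vendor and matches them against the PCI bus; its echoed "Found ..." stdout continues just below. A minimal sketch of the same idea, with the sysfs scan assumed (the real nvmf/common.sh consults a pre-built pci_bus_cache map instead):

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)                 # E810 device IDs taken from the trace
    x722=(0x37d2)
    # Assumed scan over sysfs; vendor/device files hold the hex IDs as strings.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" && " ${e810[*]} " == *" $device "* ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
    done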
00:15:56.702 Found net devices under 0000:86:00.0: cvl_0_0 00:15:56.702 10:11:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.702 10:11:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:56.702 10:11:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.702 10:11:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:56.702 10:11:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.702 10:11:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:56.702 Found net devices under 0000:86:00.1: cvl_0_1 00:15:56.702 10:11:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.702 10:11:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:56.702 10:11:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:56.702 10:11:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:56.702 10:11:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:56.702 10:11:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.702 10:11:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.702 10:11:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.702 10:11:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:56.702 10:11:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.702 10:11:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.702 10:11:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:56.702 10:11:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.702 10:11:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.702 10:11:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:56.702 10:11:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:56.702 10:11:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.702 10:11:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.702 10:11:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.702 10:11:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.702 10:11:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:56.702 10:11:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.702 10:11:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.702 10:11:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.702 10:11:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:56.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:15:56.702 00:15:56.702 --- 10.0.0.2 ping statistics --- 00:15:56.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.703 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:15:56.703 10:11:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
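Annotation: the nvmf_tcp_init sequence above pins the two E810 ports into a point-to-point setup; the target port moves into its own network namespace so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) traffic crosses a real link. Collected verbatim from the xtrace (the second ping's reply lines continue just below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator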
00:15:56.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:15:56.703 00:15:56.703 --- 10.0.0.1 ping statistics --- 00:15:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.703 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:15:56.703 10:11:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.703 10:11:09 -- nvmf/common.sh@410 -- # return 0 00:15:56.703 10:11:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:56.703 10:11:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.703 10:11:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:56.703 10:11:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:56.703 10:11:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.703 10:11:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:56.703 10:11:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:56.703 10:11:09 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:56.703 10:11:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:56.703 10:11:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:56.703 10:11:09 -- common/autotest_common.sh@10 -- # set +x 00:15:56.703 10:11:09 -- nvmf/common.sh@469 -- # nvmfpid=236226 00:15:56.703 10:11:09 -- nvmf/common.sh@470 -- # waitforlisten 236226 00:15:56.703 10:11:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.703 10:11:09 -- common/autotest_common.sh@819 -- # '[' -z 236226 ']' 00:15:56.703 10:11:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.703 10:11:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.703 10:11:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.703 10:11:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.703 10:11:09 -- common/autotest_common.sh@10 -- # set +x 00:15:56.703 [2024-04-24 10:11:09.915916] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:15:56.703 [2024-04-24 10:11:09.915961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.703 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.703 [2024-04-24 10:11:09.972928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.961 [2024-04-24 10:11:10.060503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.961 [2024-04-24 10:11:10.060610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.961 [2024-04-24 10:11:10.060618] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.961 [2024-04-24 10:11:10.060624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
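Annotation: nvmfappstart then launches the target application inside the namespace and waits for its RPC socket. A sketch with the waitforlisten polling loop assumed (the real autotest_common.sh helper is more thorough; rootdir stands for the SPDK checkout):

    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # 4 cores, all trace groups
    nvmfpid=$!
    # Assumed polling loop: wait until the app answers on its UNIX RPC socket.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid"      # abort if the target died during startup
        sleep 0.5
    done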
00:15:56.961 [2024-04-24 10:11:10.060668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.961 [2024-04-24 10:11:10.060686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.961 [2024-04-24 10:11:10.060772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.961 [2024-04-24 10:11:10.060773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.528 10:11:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.528 10:11:10 -- common/autotest_common.sh@852 -- # return 0 00:15:57.528 10:11:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:57.528 10:11:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:57.528 10:11:10 -- common/autotest_common.sh@10 -- # set +x 00:15:57.528 10:11:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.528 10:11:10 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:57.528 10:11:10 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:57.528 10:11:10 -- target/multitarget.sh@21 -- # jq length 00:15:57.786 10:11:10 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:57.786 10:11:10 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:57.786 "nvmf_tgt_1" 00:15:57.786 10:11:10 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:57.786 "nvmf_tgt_2" 00:15:58.044 10:11:11 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:58.044 10:11:11 -- target/multitarget.sh@28 -- # jq length 00:15:58.044 10:11:11 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:58.044 10:11:11 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:58.044 true 00:15:58.044 10:11:11 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:58.302 true 00:15:58.302 10:11:11 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:58.302 10:11:11 -- target/multitarget.sh@35 -- # jq length 00:15:58.302 10:11:11 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:58.302 10:11:11 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:58.302 10:11:11 -- target/multitarget.sh@41 -- # nvmftestfini 00:15:58.302 10:11:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:58.302 10:11:11 -- nvmf/common.sh@116 -- # sync 00:15:58.302 10:11:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:58.302 10:11:11 -- nvmf/common.sh@119 -- # set +e 00:15:58.302 10:11:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:58.302 10:11:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:58.302 rmmod nvme_tcp 00:15:58.302 rmmod nvme_fabrics 00:15:58.302 rmmod nvme_keyring 00:15:58.302 10:11:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:58.302 10:11:11 -- nvmf/common.sh@123 -- # set -e 00:15:58.302 10:11:11 -- nvmf/common.sh@124 -- # return 0 
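Annotation: condensing the multitarget checks traced above, everything goes through multitarget_rpc.py: count the targets, add two, count again, delete them, count once more (flags copied from the trace):

    rpc=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default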
00:15:58.302 10:11:11 -- nvmf/common.sh@477 -- # '[' -n 236226 ']' 00:15:58.302 10:11:11 -- nvmf/common.sh@478 -- # killprocess 236226 00:15:58.302 10:11:11 -- common/autotest_common.sh@926 -- # '[' -z 236226 ']' 00:15:58.302 10:11:11 -- common/autotest_common.sh@930 -- # kill -0 236226 00:15:58.302 10:11:11 -- common/autotest_common.sh@931 -- # uname 00:15:58.302 10:11:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:58.302 10:11:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 236226 00:15:58.302 10:11:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:58.302 10:11:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:58.302 10:11:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 236226' 00:15:58.302 killing process with pid 236226 00:15:58.302 10:11:11 -- common/autotest_common.sh@945 -- # kill 236226 00:15:58.302 10:11:11 -- common/autotest_common.sh@950 -- # wait 236226 00:15:58.561 10:11:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:58.561 10:11:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:58.561 10:11:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:58.561 10:11:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.561 10:11:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:58.561 10:11:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.561 10:11:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.561 10:11:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.157 10:11:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:01.157 00:16:01.157 real 0m9.357s 00:16:01.157 user 0m9.024s 00:16:01.157 sys 0m4.404s 00:16:01.157 10:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.157 10:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:01.157 ************************************ 00:16:01.157 END TEST nvmf_multitarget 00:16:01.157 ************************************ 00:16:01.157 10:11:13 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:01.157 10:11:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:01.157 10:11:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:01.157 10:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:01.157 ************************************ 00:16:01.157 START TEST nvmf_rpc 00:16:01.157 ************************************ 00:16:01.157 10:11:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:01.157 * Looking for test storage... 
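Annotation: each START TEST / END TEST banner pair above and below comes from the run_test wrapper in autotest_common.sh. In rough outline only (the wrapper's body is assumed; timing_enter/timing_exit are the helpers visible elsewhere in this trace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        timing_enter "$name"
        "$@"                  # e.g. test/nvmf/target/rpc.sh --transport=tcp
        timing_exit "$name"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }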
00:16:01.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.157 10:11:13 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.157 10:11:13 -- nvmf/common.sh@7 -- # uname -s 00:16:01.157 10:11:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.157 10:11:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.157 10:11:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.157 10:11:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.157 10:11:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.157 10:11:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.157 10:11:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.157 10:11:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.157 10:11:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.157 10:11:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.157 10:11:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.157 10:11:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.157 10:11:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.157 10:11:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.157 10:11:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.157 10:11:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.157 10:11:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.157 10:11:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.157 10:11:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.157 10:11:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.157 10:11:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.157 10:11:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.157 10:11:13 -- paths/export.sh@5 -- # export PATH 00:16:01.157 10:11:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.157 10:11:13 -- nvmf/common.sh@46 -- # : 0 00:16:01.157 10:11:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:01.157 10:11:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:01.157 10:11:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:01.157 10:11:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.157 10:11:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.157 10:11:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:01.157 10:11:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:01.157 10:11:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:01.157 10:11:13 -- target/rpc.sh@11 -- # loops=5 00:16:01.157 10:11:13 -- target/rpc.sh@23 -- # nvmftestinit 00:16:01.157 10:11:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:01.157 10:11:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.157 10:11:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:01.157 10:11:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:01.157 10:11:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:01.157 10:11:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.157 10:11:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.157 10:11:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.157 10:11:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:01.157 10:11:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:01.157 10:11:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:01.157 10:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.430 10:11:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.430 10:11:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:06.430 10:11:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:06.430 10:11:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:06.430 10:11:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:06.430 10:11:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:06.430 10:11:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:06.430 10:11:19 -- nvmf/common.sh@294 -- # net_devs=() 00:16:06.430 10:11:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:06.430 10:11:19 -- nvmf/common.sh@295 -- # e810=() 00:16:06.430 10:11:19 -- nvmf/common.sh@295 -- # local -ga e810 00:16:06.430 
10:11:19 -- nvmf/common.sh@296 -- # x722=() 00:16:06.430 10:11:19 -- nvmf/common.sh@296 -- # local -ga x722 00:16:06.430 10:11:19 -- nvmf/common.sh@297 -- # mlx=() 00:16:06.430 10:11:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:06.430 10:11:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.430 10:11:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:06.430 10:11:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:06.430 10:11:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:06.430 10:11:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:06.430 10:11:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:06.431 10:11:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:06.431 10:11:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.431 10:11:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:06.431 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:06.431 10:11:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:06.431 10:11:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:06.431 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:06.431 10:11:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:06.431 10:11:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.431 10:11:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.431 10:11:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.431 10:11:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.431 10:11:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:06.431 Found net devices under 0000:86:00.0: cvl_0_0 00:16:06.431 10:11:19 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:06.431 10:11:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:06.431 10:11:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.431 10:11:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:06.431 10:11:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.431 10:11:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:06.431 Found net devices under 0000:86:00.1: cvl_0_1 00:16:06.431 10:11:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.431 10:11:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:06.431 10:11:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:06.431 10:11:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:06.431 10:11:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.431 10:11:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.431 10:11:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.431 10:11:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:06.431 10:11:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.431 10:11:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.431 10:11:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:06.431 10:11:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.431 10:11:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.431 10:11:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:06.431 10:11:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:06.431 10:11:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.431 10:11:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.431 10:11:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.431 10:11:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.431 10:11:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:06.431 10:11:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.431 10:11:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.431 10:11:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.431 10:11:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:06.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:16:06.431 00:16:06.431 --- 10.0.0.2 ping statistics --- 00:16:06.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.431 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:06.431 10:11:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:16:06.431 00:16:06.431 --- 10.0.0.1 ping statistics --- 00:16:06.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.431 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:06.431 10:11:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.431 10:11:19 -- nvmf/common.sh@410 -- # return 0 00:16:06.431 10:11:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:06.431 10:11:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.431 10:11:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:06.431 10:11:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.431 10:11:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:06.431 10:11:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:06.431 10:11:19 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:06.431 10:11:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:06.431 10:11:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:06.431 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:16:06.431 10:11:19 -- nvmf/common.sh@469 -- # nvmfpid=240042 00:16:06.431 10:11:19 -- nvmf/common.sh@470 -- # waitforlisten 240042 00:16:06.431 10:11:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.431 10:11:19 -- common/autotest_common.sh@819 -- # '[' -z 240042 ']' 00:16:06.431 10:11:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.431 10:11:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.431 10:11:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.431 10:11:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.431 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:16:06.431 [2024-04-24 10:11:19.462214] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:16:06.431 [2024-04-24 10:11:19.462253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.431 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.431 [2024-04-24 10:11:19.519862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.431 [2024-04-24 10:11:19.595455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:06.431 [2024-04-24 10:11:19.595569] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.431 [2024-04-24 10:11:19.595576] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.431 [2024-04-24 10:11:19.595583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:06.431 [2024-04-24 10:11:19.595625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.431 [2024-04-24 10:11:19.595645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.431 [2024-04-24 10:11:19.595728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.431 [2024-04-24 10:11:19.595730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.015 10:11:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.015 10:11:20 -- common/autotest_common.sh@852 -- # return 0 00:16:07.015 10:11:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:07.015 10:11:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:07.015 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.282 10:11:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.282 10:11:20 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:07.282 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.282 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.282 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.282 10:11:20 -- target/rpc.sh@26 -- # stats='{ 00:16:07.282 "tick_rate": 2300000000, 00:16:07.282 "poll_groups": [ 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_0", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [] 00:16:07.282 }, 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_1", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [] 00:16:07.282 }, 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_2", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [] 00:16:07.282 }, 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_3", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [] 00:16:07.282 } 00:16:07.282 ] 00:16:07.282 }' 00:16:07.282 10:11:20 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:07.282 10:11:20 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:07.282 10:11:20 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:07.282 10:11:20 -- target/rpc.sh@15 -- # wc -l 00:16:07.282 10:11:20 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:07.282 10:11:20 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:07.282 10:11:20 -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:07.282 10:11:20 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:07.282 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.282 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.282 [2024-04-24 10:11:20.413755] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.282 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.282 10:11:20 -- 
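Annotation: rpc.sh validates the nvmf_get_stats payload with two small jq helpers, jcount (traced above) and jsum (traced just below). Reassembled from the xtrace; how $stats reaches jq is assumed, but the filters and the awk reduction are verbatim:

    jcount() {                         # count the entries a jq filter selects
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }
    jsum() {                           # sum the numeric values a jq filter selects
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # Usage mirrored from the trace:
    (( $(jcount '.poll_groups[].name') == 4 ))         # one poll group per core (-m 0xF)
    (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))   # nothing connected yet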
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:07.282 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.282 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.282 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.282 10:11:20 -- target/rpc.sh@33 -- # stats='{ 00:16:07.282 "tick_rate": 2300000000, 00:16:07.282 "poll_groups": [ 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_0", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [ 00:16:07.282 { 00:16:07.282 "trtype": "TCP" 00:16:07.282 } 00:16:07.282 ] 00:16:07.282 }, 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_1", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [ 00:16:07.282 { 00:16:07.282 "trtype": "TCP" 00:16:07.282 } 00:16:07.282 ] 00:16:07.282 }, 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_2", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [ 00:16:07.282 { 00:16:07.282 "trtype": "TCP" 00:16:07.282 } 00:16:07.282 ] 00:16:07.282 }, 00:16:07.282 { 00:16:07.282 "name": "nvmf_tgt_poll_group_3", 00:16:07.282 "admin_qpairs": 0, 00:16:07.282 "io_qpairs": 0, 00:16:07.282 "current_admin_qpairs": 0, 00:16:07.282 "current_io_qpairs": 0, 00:16:07.282 "pending_bdev_io": 0, 00:16:07.282 "completed_nvme_io": 0, 00:16:07.282 "transports": [ 00:16:07.282 { 00:16:07.282 "trtype": "TCP" 00:16:07.282 } 00:16:07.282 ] 00:16:07.282 } 00:16:07.282 ] 00:16:07.282 }' 00:16:07.282 10:11:20 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:07.282 10:11:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:07.282 10:11:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:07.283 10:11:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:07.283 10:11:20 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:07.283 10:11:20 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:07.283 10:11:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:07.283 10:11:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:07.283 10:11:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:07.283 10:11:20 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:07.283 10:11:20 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:07.283 10:11:20 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:07.283 10:11:20 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:07.283 10:11:20 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:07.283 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.283 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.283 Malloc1 00:16:07.283 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.283 10:11:20 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:07.283 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.283 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.540 
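Annotation: pulling together the rpc_cmd calls traced above (the listener and namespace steps continue just below), the data path is assembled in five RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py against /var/tmp/spdk.sock, and flags are copied from the trace:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # -u 8192: IO unit size in bytes
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1         # 64 MiB RAM bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                       # -a: allow any host, -s: serial
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420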
10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.540 10:11:20 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.540 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.540 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.540 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.540 10:11:20 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:07.540 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.540 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.540 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.540 10:11:20 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.540 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.540 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.540 [2024-04-24 10:11:20.585969] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.540 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.540 10:11:20 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:07.540 10:11:20 -- common/autotest_common.sh@640 -- # local es=0 00:16:07.540 10:11:20 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:07.540 10:11:20 -- common/autotest_common.sh@628 -- # local arg=nvme 00:16:07.540 10:11:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.540 10:11:20 -- common/autotest_common.sh@632 -- # type -t nvme 00:16:07.540 10:11:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.540 10:11:20 -- common/autotest_common.sh@634 -- # type -P nvme 00:16:07.540 10:11:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.540 10:11:20 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:16:07.540 10:11:20 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:16:07.540 10:11:20 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:07.540 [2024-04-24 10:11:20.610598] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:16:07.540 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:07.540 could not add new controller: failed to write to nvme-fabrics device 00:16:07.540 10:11:20 -- common/autotest_common.sh@643 -- # es=1 00:16:07.540 10:11:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:07.540 10:11:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:07.540 10:11:20 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:16:07.540 10:11:20 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.540 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.540 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.540 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.540 10:11:20 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.475 10:11:21 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.475 10:11:21 -- common/autotest_common.sh@1177 -- # local i=0 00:16:08.475 10:11:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.475 10:11:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:08.475 10:11:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:11.003 10:11:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:11.003 10:11:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:11.003 10:11:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.003 10:11:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:11.003 10:11:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.003 10:11:23 -- common/autotest_common.sh@1187 -- # return 0 00:16:11.003 10:11:23 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.003 10:11:23 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.003 10:11:23 -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.003 10:11:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:11.003 10:11:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.003 10:11:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:11.003 10:11:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.003 10:11:23 -- common/autotest_common.sh@1210 -- # return 0 00:16:11.003 10:11:23 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.003 10:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.003 10:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:11.003 10:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.003 10:11:23 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.003 10:11:23 -- common/autotest_common.sh@640 -- # local es=0 00:16:11.003 10:11:23 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.003 10:11:23 -- common/autotest_common.sh@628 -- # local arg=nvme 00:16:11.003 10:11:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.003 10:11:23 -- common/autotest_common.sh@632 -- # type -t nvme 00:16:11.003 10:11:23 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.003 10:11:23 -- common/autotest_common.sh@634 -- # type -P nvme 00:16:11.003 10:11:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:11.003 10:11:23 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:16:11.003 10:11:23 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:16:11.003 10:11:23 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.003 [2024-04-24 10:11:23.924958] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:16:11.003 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:11.003 could not add new controller: failed to write to nvme-fabrics device 00:16:11.003 10:11:23 -- common/autotest_common.sh@643 -- # es=1 00:16:11.003 10:11:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.003 10:11:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.003 10:11:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.003 10:11:23 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:11.003 10:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.003 10:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:11.003 10:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.003 10:11:23 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.937 10:11:25 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.937 10:11:25 -- common/autotest_common.sh@1177 -- # local i=0 00:16:11.937 10:11:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.937 10:11:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:11.937 10:11:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:13.838 10:11:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:13.838 10:11:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:13.838 10:11:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.838 10:11:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:13.838 10:11:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.838 10:11:27 -- common/autotest_common.sh@1187 -- # return 0 00:16:13.838 10:11:27 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.096 10:11:27 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.096 10:11:27 -- common/autotest_common.sh@1198 -- # local i=0 00:16:14.096 10:11:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:14.096 10:11:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.096 10:11:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:14.096 10:11:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.096 10:11:27 -- common/autotest_common.sh@1210 -- # return 0 00:16:14.096 10:11:27 -- 
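Annotation: the two "does not allow host" failures above are the host access-control check: with allow_any_host disabled a connect must fail, and after nvmf_subsystem_add_host it must succeed (NOT is the autotest helper that asserts the wrapped command fails). In outline, condensed from the trace:

    nqn=nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_allow_any_host -d "$nqn"          # -d: disable any-host access
    NOT nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # expected: Input/output error
    rpc_cmd nvmf_subsystem_add_host "$nqn" "$NVME_HOSTNQN"   # whitelist this host NQN
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # now succeeds
    waitforserial SPDKISFASTANDAWESOME
    # The trace then repeats the negative case after nvmf_subsystem_remove_host,
    # and finishes by re-enabling access with nvmf_subsystem_allow_any_host -e.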
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.096 10:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.096 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.096 10:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.096 10:11:27 -- target/rpc.sh@81 -- # seq 1 5 00:16:14.096 10:11:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:14.096 10:11:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.096 10:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.096 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.096 10:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.096 10:11:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.096 10:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.096 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.097 [2024-04-24 10:11:27.197909] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.097 10:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.097 10:11:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:14.097 10:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.097 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.097 10:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.097 10:11:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.097 10:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.097 10:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.097 10:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.097 10:11:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.032 10:11:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.032 10:11:28 -- common/autotest_common.sh@1177 -- # local i=0 00:16:15.032 10:11:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.032 10:11:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:15.032 10:11:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:17.567 10:11:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:17.567 10:11:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:17.567 10:11:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.567 10:11:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:17.567 10:11:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.567 10:11:30 -- common/autotest_common.sh@1187 -- # return 0 00:16:17.567 10:11:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.568 10:11:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.568 10:11:30 -- common/autotest_common.sh@1198 -- # local i=0 00:16:17.568 10:11:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:17.568 10:11:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
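The autotest_common.sh lines interleaved through these iterations are the serial-number polling helpers that bracket every connect and disconnect. Reconstructed roughly from the xtrace (the in-tree versions carry a few extra branches, e.g. an optional device count):

waitforserial() {                        # block until the namespace appears
    local serial=$1 want=${2:-1} i=0 got
    while (( i++ <= 15 )); do
        sleep 2
        got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( got == want )) && return 0    # every expected device is visible
    done
    return 1
}

waitforserial_disconnect() {             # block until it is gone again
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}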
00:16:17.568 10:11:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:17.568 10:11:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.568 10:11:30 -- common/autotest_common.sh@1210 -- # return 0 00:16:17.568 10:11:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.568 10:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.568 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.568 10:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.568 10:11:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.568 10:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.568 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.568 10:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.568 10:11:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:17.568 10:11:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:17.568 10:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.568 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.568 10:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.568 10:11:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.568 10:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.568 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.568 [2024-04-24 10:11:30.506505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.568 10:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.568 10:11:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:17.568 10:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.568 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.568 10:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.568 10:11:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:17.568 10:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.568 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.568 10:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.568 10:11:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.503 10:11:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.503 10:11:31 -- common/autotest_common.sh@1177 -- # local i=0 00:16:18.503 10:11:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.503 10:11:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:18.503 10:11:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:21.036 10:11:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:21.036 10:11:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:21.036 10:11:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.036 10:11:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:21.036 10:11:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.036 10:11:33 -- 
common/autotest_common.sh@1187 -- # return 0 00:16:21.036 10:11:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.036 10:11:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.036 10:11:33 -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.036 10:11:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:21.036 10:11:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.036 10:11:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:21.036 10:11:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.036 10:11:33 -- common/autotest_common.sh@1210 -- # return 0 00:16:21.036 10:11:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:21.036 10:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.036 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 10:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.036 10:11:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.036 10:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.036 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 10:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.036 10:11:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:21.036 10:11:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:21.036 10:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.036 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 10:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.036 10:11:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.036 10:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.036 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 [2024-04-24 10:11:33.849844] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.036 10:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.036 10:11:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:21.036 10:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.036 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 10:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.036 10:11:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:21.036 10:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.036 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 10:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.036 10:11:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.969 10:11:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.969 10:11:34 -- common/autotest_common.sh@1177 -- # local i=0 00:16:21.969 10:11:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.969 10:11:34 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:16:21.969 10:11:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:23.870 10:11:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:23.870 10:11:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:23.870 10:11:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.870 10:11:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:23.870 10:11:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.870 10:11:36 -- common/autotest_common.sh@1187 -- # return 0 00:16:23.870 10:11:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.870 10:11:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.870 10:11:37 -- common/autotest_common.sh@1198 -- # local i=0 00:16:23.870 10:11:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:23.870 10:11:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.870 10:11:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:23.870 10:11:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.870 10:11:37 -- common/autotest_common.sh@1210 -- # return 0 00:16:23.870 10:11:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.870 10:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.870 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.870 10:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.870 10:11:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.870 10:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.870 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.870 10:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.870 10:11:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:23.870 10:11:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.870 10:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.870 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.870 10:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.870 10:11:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.870 10:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.870 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.871 [2024-04-24 10:11:37.142113] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.871 10:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.871 10:11:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:23.871 10:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.871 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:16:24.129 10:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.129 10:11:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.129 10:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.129 10:11:37 -- common/autotest_common.sh@10 -- # set +x 00:16:24.129 10:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.129 
10:11:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.064 10:11:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.064 10:11:38 -- common/autotest_common.sh@1177 -- # local i=0 00:16:25.064 10:11:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.064 10:11:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:25.064 10:11:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:27.600 10:11:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:27.600 10:11:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:27.600 10:11:40 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.600 10:11:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:27.600 10:11:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.600 10:11:40 -- common/autotest_common.sh@1187 -- # return 0 00:16:27.600 10:11:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.600 10:11:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.600 10:11:40 -- common/autotest_common.sh@1198 -- # local i=0 00:16:27.600 10:11:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:27.600 10:11:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.600 10:11:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:27.600 10:11:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.600 10:11:40 -- common/autotest_common.sh@1210 -- # return 0 00:16:27.600 10:11:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.600 10:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.600 10:11:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.600 10:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.600 10:11:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.600 10:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.600 10:11:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.600 10:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.600 10:11:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:27.600 10:11:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.600 10:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.600 10:11:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.600 10:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.600 10:11:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.600 10:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.600 10:11:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.600 [2024-04-24 10:11:40.580656] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.600 10:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.600 10:11:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:27.600 
10:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.600 10:11:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.600 10:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.601 10:11:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.601 10:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.601 10:11:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.601 10:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.601 10:11:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.537 10:11:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.537 10:11:41 -- common/autotest_common.sh@1177 -- # local i=0 00:16:28.537 10:11:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.537 10:11:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:28.537 10:11:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:31.069 10:11:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:31.069 10:11:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:31.069 10:11:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.069 10:11:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:31.069 10:11:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.069 10:11:43 -- common/autotest_common.sh@1187 -- # return 0 00:16:31.069 10:11:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.069 10:11:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.069 10:11:43 -- common/autotest_common.sh@1198 -- # local i=0 00:16:31.069 10:11:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:31.069 10:11:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.069 10:11:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:31.069 10:11:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.069 10:11:43 -- common/autotest_common.sh@1210 -- # return 0 00:16:31.069 10:11:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 -- target/rpc.sh@99 -- # seq 1 5 00:16:31.069 10:11:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.069 10:11:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 
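All five iterations above run the same lifecycle; collapsed into one loop body (same sketch variables as above, waitforserial per the earlier reconstruction) it is essentially:

for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$subnqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 5      # bdev Malloc1 as nsid 5
    $rpc nvmf_subsystem_allow_any_host "$subnqn"
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n "$subnqn"
    waitforserial_disconnect SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_remove_ns "$subnqn" 5              # nsid matches -n above
    $rpc nvmf_delete_subsystem "$subnqn"
done

The loop that starts in the trace here (rpc.sh@99-107) repeats only the control-plane half of this, creating and deleting each subsystem without a host connect.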
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 [2024-04-24 10:11:43.970347] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.069 10:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.069 10:11:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 [2024-04-24 10:11:44.018444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.069 10:11:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 [2024-04-24 10:11:44.066594] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.069 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.069 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.069 10:11:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.069 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.070 10:11:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 [2024-04-24 10:11:44.118784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 
10:11:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.070 10:11:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 [2024-04-24 10:11:44.166966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:16:31.070 10:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.070 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.070 10:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.070 10:11:44 -- target/rpc.sh@110 -- # stats='{ 00:16:31.070 "tick_rate": 2300000000, 00:16:31.070 "poll_groups": [ 00:16:31.070 { 00:16:31.070 "name": "nvmf_tgt_poll_group_0", 00:16:31.070 "admin_qpairs": 2, 00:16:31.070 "io_qpairs": 168, 00:16:31.070 "current_admin_qpairs": 0, 00:16:31.070 "current_io_qpairs": 0, 00:16:31.070 "pending_bdev_io": 0, 00:16:31.070 "completed_nvme_io": 316, 00:16:31.070 "transports": [ 00:16:31.070 { 00:16:31.070 "trtype": "TCP" 00:16:31.070 } 00:16:31.070 ] 00:16:31.070 }, 00:16:31.070 { 00:16:31.070 "name": "nvmf_tgt_poll_group_1", 00:16:31.070 "admin_qpairs": 2, 00:16:31.070 "io_qpairs": 168, 00:16:31.070 "current_admin_qpairs": 0, 00:16:31.070 "current_io_qpairs": 0, 00:16:31.070 "pending_bdev_io": 0, 00:16:31.070 "completed_nvme_io": 219, 00:16:31.070 "transports": [ 00:16:31.070 { 00:16:31.070 "trtype": "TCP" 00:16:31.070 } 00:16:31.070 ] 00:16:31.070 }, 00:16:31.070 { 00:16:31.070 "name": "nvmf_tgt_poll_group_2", 00:16:31.070 "admin_qpairs": 1, 00:16:31.070 "io_qpairs": 168, 00:16:31.070 "current_admin_qpairs": 0, 00:16:31.070 "current_io_qpairs": 0, 00:16:31.070 "pending_bdev_io": 0, 00:16:31.070 "completed_nvme_io": 253, 00:16:31.070 "transports": [ 00:16:31.070 { 00:16:31.070 "trtype": "TCP" 00:16:31.070 } 00:16:31.070 ] 00:16:31.070 }, 00:16:31.070 { 00:16:31.070 "name": "nvmf_tgt_poll_group_3", 00:16:31.070 "admin_qpairs": 2, 00:16:31.070 "io_qpairs": 168, 00:16:31.070 "current_admin_qpairs": 0, 00:16:31.070 "current_io_qpairs": 0, 00:16:31.070 "pending_bdev_io": 0, 00:16:31.070 "completed_nvme_io": 234, 00:16:31.070 "transports": [ 00:16:31.070 { 00:16:31.070 "trtype": "TCP" 00:16:31.070 } 00:16:31.070 ] 00:16:31.070 } 00:16:31.070 ] 00:16:31.070 }' 00:16:31.070 10:11:44 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:31.070 10:11:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:31.070 10:11:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:31.070 10:11:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:31.070 10:11:44 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:31.070 10:11:44 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:31.070 10:11:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:31.070 10:11:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:31.070 10:11:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:31.070 10:11:44 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:31.070 10:11:44 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:31.070 10:11:44 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:31.070 10:11:44 -- target/rpc.sh@123 -- # nvmftestfini 00:16:31.070 10:11:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:31.070 10:11:44 -- nvmf/common.sh@116 -- # sync 00:16:31.070 10:11:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:31.070 10:11:44 -- nvmf/common.sh@119 -- # set +e 00:16:31.070 10:11:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:31.070 10:11:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:31.070 rmmod nvme_tcp 00:16:31.070 rmmod nvme_fabrics 00:16:31.328 rmmod nvme_keyring 00:16:31.328 10:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:31.328 10:11:44 -- nvmf/common.sh@123 -- # set -e 00:16:31.328 10:11:44 -- 
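jsum, expanded at rpc.sh@19-20 in the trace above, folds one numeric field across the captured nvmf_get_stats payload: jq pulls the field out of every poll group and awk sums the column. As the trace implies it (piping in the saved $stats is an assumption; only the jq | awk stage is visible here):

jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
}

# against the payload above:
#   jsum '.poll_groups[].admin_qpairs'  -> 7    (2+2+1+2)
#   jsum '.poll_groups[].io_qpairs'     -> 672  (4 groups x 168)

With both sums positive the test passes and nvmftestfini unloads the NVMe modules, as the rmmod lines above show.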
nvmf/common.sh@124 -- # return 0 00:16:31.328 10:11:44 -- nvmf/common.sh@477 -- # '[' -n 240042 ']' 00:16:31.328 10:11:44 -- nvmf/common.sh@478 -- # killprocess 240042 00:16:31.328 10:11:44 -- common/autotest_common.sh@926 -- # '[' -z 240042 ']' 00:16:31.328 10:11:44 -- common/autotest_common.sh@930 -- # kill -0 240042 00:16:31.328 10:11:44 -- common/autotest_common.sh@931 -- # uname 00:16:31.328 10:11:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:31.328 10:11:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 240042 00:16:31.328 10:11:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:31.328 10:11:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:31.328 10:11:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 240042' 00:16:31.328 killing process with pid 240042 00:16:31.328 10:11:44 -- common/autotest_common.sh@945 -- # kill 240042 00:16:31.328 10:11:44 -- common/autotest_common.sh@950 -- # wait 240042 00:16:31.588 10:11:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:31.588 10:11:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:31.588 10:11:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:31.588 10:11:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.588 10:11:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:31.588 10:11:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.588 10:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.588 10:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.541 10:11:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:33.541 00:16:33.541 real 0m32.832s 00:16:33.541 user 1m41.367s 00:16:33.541 sys 0m5.866s 00:16:33.541 10:11:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.541 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:16:33.541 ************************************ 00:16:33.541 END TEST nvmf_rpc 00:16:33.541 ************************************ 00:16:33.541 10:11:46 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:33.541 10:11:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:33.541 10:11:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:33.541 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:16:33.541 ************************************ 00:16:33.541 START TEST nvmf_invalid 00:16:33.541 ************************************ 00:16:33.541 10:11:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:33.861 * Looking for test storage... 
00:16:33.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.861 10:11:46 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.861 10:11:46 -- nvmf/common.sh@7 -- # uname -s 00:16:33.861 10:11:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.861 10:11:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.861 10:11:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.861 10:11:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.861 10:11:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.861 10:11:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.861 10:11:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.861 10:11:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.861 10:11:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.861 10:11:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.861 10:11:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.861 10:11:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.861 10:11:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.861 10:11:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.861 10:11:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.861 10:11:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.861 10:11:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.861 10:11:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.861 10:11:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.861 10:11:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.861 10:11:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.861 10:11:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.861 10:11:46 -- paths/export.sh@5 -- # export PATH 00:16:33.861 10:11:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.861 10:11:46 -- nvmf/common.sh@46 -- # : 0 00:16:33.861 10:11:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:33.861 10:11:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:33.861 10:11:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:33.861 10:11:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.861 10:11:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.861 10:11:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:33.862 10:11:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:33.862 10:11:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:33.862 10:11:46 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:33.862 10:11:46 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:33.862 10:11:46 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:33.862 10:11:46 -- target/invalid.sh@14 -- # target=foobar 00:16:33.862 10:11:46 -- target/invalid.sh@16 -- # RANDOM=0 00:16:33.862 10:11:46 -- target/invalid.sh@34 -- # nvmftestinit 00:16:33.862 10:11:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:33.862 10:11:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.862 10:11:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:33.862 10:11:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:33.862 10:11:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:33.862 10:11:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.862 10:11:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.862 10:11:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.862 10:11:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:33.862 10:11:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:33.862 10:11:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:33.862 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:16:39.144 10:11:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.144 10:11:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:39.144 10:11:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:39.144 10:11:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:39.144 10:11:52 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:39.144 10:11:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:39.144 10:11:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:39.144 10:11:52 -- nvmf/common.sh@294 -- # net_devs=() 00:16:39.144 10:11:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:39.144 10:11:52 -- nvmf/common.sh@295 -- # e810=() 00:16:39.144 10:11:52 -- nvmf/common.sh@295 -- # local -ga e810 00:16:39.144 10:11:52 -- nvmf/common.sh@296 -- # x722=() 00:16:39.144 10:11:52 -- nvmf/common.sh@296 -- # local -ga x722 00:16:39.144 10:11:52 -- nvmf/common.sh@297 -- # mlx=() 00:16:39.144 10:11:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:39.144 10:11:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.144 10:11:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:39.144 10:11:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:39.144 10:11:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:39.144 10:11:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:39.144 10:11:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:39.144 10:11:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:39.144 10:11:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.144 10:11:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:39.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:39.144 10:11:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.144 10:11:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.144 10:11:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.145 10:11:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:39.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:39.145 10:11:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:39.145 10:11:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.145 
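gather_supported_nvmf_pci_devs classifies candidate NICs purely by PCI vendor:device ID: the e810/x722/mlx arrays are filled from a pci_bus_cache lookup table and then collapsed into pci_devs. In sketch form (pci_bus_cache is an associative map from vendor:device to BDF lists, built earlier in nvmf/common.sh; the device IDs are the ones enumerated above):

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the ID both ports matched above
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs in the table
pci_devs=("${e810[@]}")                      # on this e810 rig, only those are kept

The per-device loop the trace resumes with below then maps each surviving BDF to its net devices via /sys/bus/pci/devices/$pci/net.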
10:11:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.145 10:11:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.145 10:11:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.145 10:11:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:39.145 Found net devices under 0000:86:00.0: cvl_0_0 00:16:39.145 10:11:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.145 10:11:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.145 10:11:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.145 10:11:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.145 10:11:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.145 10:11:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:39.145 Found net devices under 0000:86:00.1: cvl_0_1 00:16:39.145 10:11:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.145 10:11:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:39.145 10:11:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:39.145 10:11:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:39.145 10:11:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.145 10:11:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.145 10:11:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.145 10:11:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:39.145 10:11:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.145 10:11:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.145 10:11:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:39.145 10:11:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.145 10:11:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.145 10:11:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:39.145 10:11:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:39.145 10:11:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.145 10:11:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.145 10:11:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.145 10:11:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.145 10:11:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:39.145 10:11:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.145 10:11:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.145 10:11:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.145 10:11:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:39.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:39.145 00:16:39.145 --- 10.0.0.2 ping statistics --- 00:16:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.145 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:39.145 10:11:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:16:39.145 00:16:39.145 --- 10.0.0.1 ping statistics --- 00:16:39.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.145 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:16:39.145 10:11:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.145 10:11:52 -- nvmf/common.sh@410 -- # return 0 00:16:39.145 10:11:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.145 10:11:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.145 10:11:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.145 10:11:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.145 10:11:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.145 10:11:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.403 10:11:52 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:39.403 10:11:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.403 10:11:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:39.403 10:11:52 -- common/autotest_common.sh@10 -- # set +x 00:16:39.403 10:11:52 -- nvmf/common.sh@469 -- # nvmfpid=247948 00:16:39.403 10:11:52 -- nvmf/common.sh@470 -- # waitforlisten 247948 00:16:39.403 10:11:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.403 10:11:52 -- common/autotest_common.sh@819 -- # '[' -z 247948 ']' 00:16:39.403 10:11:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.403 10:11:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.403 10:11:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.403 10:11:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.403 10:11:52 -- common/autotest_common.sh@10 -- # set +x 00:16:39.403 [2024-04-24 10:11:52.477155] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:16:39.403 [2024-04-24 10:11:52.477201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.403 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.403 [2024-04-24 10:11:52.533560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.403 [2024-04-24 10:11:52.609637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.403 [2024-04-24 10:11:52.609745] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.403 [2024-04-24 10:11:52.609754] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.403 [2024-04-24 10:11:52.609761] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
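nvmf_tcp_init, traced just before these pings, is what built the rig the pings verified: the target-side NIC (cvl_0_0) moves into a private network namespace, the initiator NIC (cvl_0_1) stays in the root namespace, and an iptables rule admits NVMe/TCP on port 4420. Lifted straight from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator

This is also why nvmf_tgt above is launched through ip netns exec cvl_0_0_ns_spdk: the target must listen on 10.0.0.2 inside that namespace.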
00:16:39.403 [2024-04-24 10:11:52.609804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.403 [2024-04-24 10:11:52.609826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.403 [2024-04-24 10:11:52.609891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.403 [2024-04-24 10:11:52.609893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.338 10:11:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:40.338 10:11:53 -- common/autotest_common.sh@852 -- # return 0 00:16:40.338 10:11:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.338 10:11:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:40.338 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:16:40.338 10:11:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.338 10:11:53 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:40.338 10:11:53 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15881 00:16:40.338 [2024-04-24 10:11:53.464813] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:40.338 10:11:53 -- target/invalid.sh@40 -- # out='request: 00:16:40.338 { 00:16:40.339 "nqn": "nqn.2016-06.io.spdk:cnode15881", 00:16:40.339 "tgt_name": "foobar", 00:16:40.339 "method": "nvmf_create_subsystem", 00:16:40.339 "req_id": 1 00:16:40.339 } 00:16:40.339 Got JSON-RPC error response 00:16:40.339 response: 00:16:40.339 { 00:16:40.339 "code": -32603, 00:16:40.339 "message": "Unable to find target foobar" 00:16:40.339 }' 00:16:40.339 10:11:53 -- target/invalid.sh@41 -- # [[ request: 00:16:40.339 { 00:16:40.339 "nqn": "nqn.2016-06.io.spdk:cnode15881", 00:16:40.339 "tgt_name": "foobar", 00:16:40.339 "method": "nvmf_create_subsystem", 00:16:40.339 "req_id": 1 00:16:40.339 } 00:16:40.339 Got JSON-RPC error response 00:16:40.339 response: 00:16:40.339 { 00:16:40.339 "code": -32603, 00:16:40.339 "message": "Unable to find target foobar" 00:16:40.339 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:40.339 10:11:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:40.339 10:11:53 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17502 00:16:40.597 [2024-04-24 10:11:53.653492] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17502: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:40.597 10:11:53 -- target/invalid.sh@45 -- # out='request: 00:16:40.597 { 00:16:40.597 "nqn": "nqn.2016-06.io.spdk:cnode17502", 00:16:40.597 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:40.597 "method": "nvmf_create_subsystem", 00:16:40.597 "req_id": 1 00:16:40.597 } 00:16:40.597 Got JSON-RPC error response 00:16:40.597 response: 00:16:40.597 { 00:16:40.597 "code": -32602, 00:16:40.597 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:40.597 }' 00:16:40.597 10:11:53 -- target/invalid.sh@46 -- # [[ request: 00:16:40.597 { 00:16:40.597 "nqn": "nqn.2016-06.io.spdk:cnode17502", 00:16:40.597 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:40.597 "method": "nvmf_create_subsystem", 00:16:40.597 "req_id": 1 00:16:40.597 } 00:16:40.597 Got JSON-RPC error response 00:16:40.597 response: 00:16:40.597 { 
00:16:40.597 "code": -32602, 00:16:40.597 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:40.597 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:40.597 10:11:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:40.597 10:11:53 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21425 00:16:40.597 [2024-04-24 10:11:53.842080] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21425: invalid model number 'SPDK_Controller' 00:16:40.597 10:11:53 -- target/invalid.sh@50 -- # out='request: 00:16:40.597 { 00:16:40.597 "nqn": "nqn.2016-06.io.spdk:cnode21425", 00:16:40.597 "model_number": "SPDK_Controller\u001f", 00:16:40.597 "method": "nvmf_create_subsystem", 00:16:40.597 "req_id": 1 00:16:40.597 } 00:16:40.597 Got JSON-RPC error response 00:16:40.597 response: 00:16:40.597 { 00:16:40.597 "code": -32602, 00:16:40.597 "message": "Invalid MN SPDK_Controller\u001f" 00:16:40.597 }' 00:16:40.597 10:11:53 -- target/invalid.sh@51 -- # [[ request: 00:16:40.597 { 00:16:40.597 "nqn": "nqn.2016-06.io.spdk:cnode21425", 00:16:40.597 "model_number": "SPDK_Controller\u001f", 00:16:40.597 "method": "nvmf_create_subsystem", 00:16:40.597 "req_id": 1 00:16:40.597 } 00:16:40.597 Got JSON-RPC error response 00:16:40.597 response: 00:16:40.597 { 00:16:40.597 "code": -32602, 00:16:40.597 "message": "Invalid MN SPDK_Controller\u001f" 00:16:40.597 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:40.597 10:11:53 -- target/invalid.sh@54 -- # gen_random_s 21 00:16:40.597 10:11:53 -- target/invalid.sh@19 -- # local length=21 ll 00:16:40.856 10:11:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:40.856 10:11:53 -- target/invalid.sh@21 -- # local chars 00:16:40.856 10:11:53 -- target/invalid.sh@22 -- # local string 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 77 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=M 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 38 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+='&' 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 118 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=v 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 48 00:16:40.856 10:11:53 -- 
target/invalid.sh@25 -- # echo -e '\x30' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=0 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 103 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=g 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 66 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=B 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 62 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+='>' 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 125 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+='}' 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 78 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=N 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 36 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+='$' 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 107 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=k 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 54 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=6 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 80 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=P 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 67 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=C 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 106 00:16:40.856 10:11:53 -- 
target/invalid.sh@25 -- # echo -e '\x6a' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=j 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 117 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=u 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 85 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=U 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 41 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=')' 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 52 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=4 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 115 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # string+=s 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # printf %x 73 00:16:40.856 10:11:53 -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:40.856 10:11:54 -- target/invalid.sh@25 -- # string+=I 00:16:40.856 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.856 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.856 10:11:54 -- target/invalid.sh@28 -- # [[ M == \- ]] 00:16:40.856 10:11:54 -- target/invalid.sh@31 -- # echo 'M&v0gB>}N$k6PCjuU)4sI' 00:16:40.857 10:11:54 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'M&v0gB>}N$k6PCjuU)4sI' nqn.2016-06.io.spdk:cnode9419 00:16:41.116 [2024-04-24 10:11:54.155155] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9419: invalid serial number 'M&v0gB>}N$k6PCjuU)4sI' 00:16:41.116 10:11:54 -- target/invalid.sh@54 -- # out='request: 00:16:41.116 { 00:16:41.116 "nqn": "nqn.2016-06.io.spdk:cnode9419", 00:16:41.116 "serial_number": "M&v0gB>}N$k6PCjuU)4sI", 00:16:41.116 "method": "nvmf_create_subsystem", 00:16:41.116 "req_id": 1 00:16:41.116 } 00:16:41.116 Got JSON-RPC error response 00:16:41.116 response: 00:16:41.116 { 00:16:41.116 "code": -32602, 00:16:41.116 "message": "Invalid SN M&v0gB>}N$k6PCjuU)4sI" 00:16:41.116 }' 00:16:41.116 10:11:54 -- target/invalid.sh@55 -- # [[ request: 00:16:41.116 { 00:16:41.116 "nqn": "nqn.2016-06.io.spdk:cnode9419", 00:16:41.116 "serial_number": "M&v0gB>}N$k6PCjuU)4sI", 00:16:41.116 "method": "nvmf_create_subsystem", 00:16:41.116 "req_id": 1 00:16:41.116 } 00:16:41.116 Got JSON-RPC error response 00:16:41.116 response: 00:16:41.116 { 00:16:41.116 "code": -32602, 00:16:41.116 "message": 
"Invalid SN M&v0gB>}N$k6PCjuU)4sI" 00:16:41.116 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:41.116 10:11:54 -- target/invalid.sh@58 -- # gen_random_s 41 00:16:41.116 10:11:54 -- target/invalid.sh@19 -- # local length=41 ll 00:16:41.116 10:11:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:41.116 10:11:54 -- target/invalid.sh@21 -- # local chars 00:16:41.116 10:11:54 -- target/invalid.sh@22 -- # local string 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 49 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=1 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 63 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='?' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 124 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='|' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 111 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=o 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 125 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='}' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 125 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='}' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 47 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=/ 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 119 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=w 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 102 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=f 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 100 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=d 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 100 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=d 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 119 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=w 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 45 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=- 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 123 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='{' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 52 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=4 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 48 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=0 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 96 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='`' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 62 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='>' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 110 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=n 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 36 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+='$' 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 102 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=f 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 106 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # string+=j 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.116 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # printf %x 90 00:16:41.116 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=Z 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 62 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+='>' 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 99 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=c 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 43 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=+ 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 101 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=e 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 97 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=a 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 84 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=T 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 82 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=R 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 36 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+='$' 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 126 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+='~' 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 107 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=k 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 65 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # string+=A 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.117 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.117 10:11:54 -- target/invalid.sh@25 -- # printf %x 115 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+=s 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # printf %x 37 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+=% 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # printf %x 112 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+=p 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # printf %x 40 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+='(' 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # printf %x 35 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+='#' 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # printf %x 123 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+='{' 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # printf %x 59 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:41.376 10:11:54 -- target/invalid.sh@25 -- # string+=';' 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:41.376 10:11:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.376 10:11:54 -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:16:41.376 10:11:54 -- target/invalid.sh@31 -- # echo '1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;' 00:16:41.376 10:11:54 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;' nqn.2016-06.io.spdk:cnode8312 00:16:41.376 [2024-04-24 10:11:54.580601] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8312: invalid model number '1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;' 00:16:41.376 10:11:54 -- target/invalid.sh@58 -- # out='request: 00:16:41.376 { 00:16:41.376 "nqn": "nqn.2016-06.io.spdk:cnode8312", 00:16:41.376 "model_number": "1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;", 00:16:41.376 "method": "nvmf_create_subsystem", 00:16:41.376 "req_id": 1 00:16:41.376 } 00:16:41.376 Got JSON-RPC error response 00:16:41.376 response: 00:16:41.376 { 00:16:41.376 "code": -32602, 00:16:41.376 "message": "Invalid MN 1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;" 00:16:41.376 }' 00:16:41.376 10:11:54 -- target/invalid.sh@59 -- # [[ request: 00:16:41.376 { 00:16:41.376 "nqn": "nqn.2016-06.io.spdk:cnode8312", 00:16:41.376 "model_number": "1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;", 00:16:41.376 "method": "nvmf_create_subsystem", 00:16:41.376 "req_id": 1 00:16:41.376 } 00:16:41.376 Got JSON-RPC error response 00:16:41.376 response: 00:16:41.376 { 00:16:41.376 "code": -32602, 00:16:41.376 "message": "Invalid MN 1?|o}}/wfddw-{40`>n$fjZ>c+eaTR$~kAs%p(#{;" 00:16:41.376 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:41.376 10:11:54 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:41.635 [2024-04-24 10:11:54.765327] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.635 10:11:54 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:41.893 10:11:54 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:41.893 10:11:54 -- target/invalid.sh@67 -- # echo '' 00:16:41.893 10:11:54 -- target/invalid.sh@67 -- # head -n 1 00:16:41.893 10:11:54 -- target/invalid.sh@67 -- # IP= 00:16:41.893 10:11:54 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:41.893 [2024-04-24 10:11:55.142609] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:41.894 10:11:55 -- target/invalid.sh@69 -- # out='request: 00:16:41.894 { 00:16:41.894 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:41.894 "listen_address": { 00:16:41.894 "trtype": "tcp", 00:16:41.894 "traddr": "", 00:16:41.894 "trsvcid": "4421" 00:16:41.894 }, 00:16:41.894 "method": "nvmf_subsystem_remove_listener", 00:16:41.894 "req_id": 1 00:16:41.894 } 00:16:41.894 Got JSON-RPC error response 00:16:41.894 response: 00:16:41.894 { 00:16:41.894 "code": -32602, 00:16:41.894 "message": "Invalid parameters" 00:16:41.894 }' 00:16:41.894 10:11:55 -- target/invalid.sh@70 -- # [[ request: 00:16:41.894 { 00:16:41.894 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:41.894 "listen_address": { 00:16:41.894 "trtype": "tcp", 00:16:41.894 "traddr": "", 00:16:41.894 "trsvcid": "4421" 00:16:41.894 }, 00:16:41.894 "method": "nvmf_subsystem_remove_listener", 00:16:41.894 "req_id": 
1 00:16:41.894 } 00:16:41.894 Got JSON-RPC error response 00:16:41.894 response: 00:16:41.894 { 00:16:41.894 "code": -32602, 00:16:41.894 "message": "Invalid parameters" 00:16:41.894 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:41.894 10:11:55 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28747 -i 0 00:16:42.152 [2024-04-24 10:11:55.323201] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28747: invalid cntlid range [0-65519] 00:16:42.152 10:11:55 -- target/invalid.sh@73 -- # out='request: 00:16:42.152 { 00:16:42.152 "nqn": "nqn.2016-06.io.spdk:cnode28747", 00:16:42.152 "min_cntlid": 0, 00:16:42.152 "method": "nvmf_create_subsystem", 00:16:42.152 "req_id": 1 00:16:42.152 } 00:16:42.152 Got JSON-RPC error response 00:16:42.152 response: 00:16:42.152 { 00:16:42.152 "code": -32602, 00:16:42.152 "message": "Invalid cntlid range [0-65519]" 00:16:42.152 }' 00:16:42.152 10:11:55 -- target/invalid.sh@74 -- # [[ request: 00:16:42.152 { 00:16:42.152 "nqn": "nqn.2016-06.io.spdk:cnode28747", 00:16:42.152 "min_cntlid": 0, 00:16:42.152 "method": "nvmf_create_subsystem", 00:16:42.152 "req_id": 1 00:16:42.152 } 00:16:42.152 Got JSON-RPC error response 00:16:42.152 response: 00:16:42.152 { 00:16:42.152 "code": -32602, 00:16:42.152 "message": "Invalid cntlid range [0-65519]" 00:16:42.152 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.152 10:11:55 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24885 -i 65520 00:16:42.411 [2024-04-24 10:11:55.507826] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24885: invalid cntlid range [65520-65519] 00:16:42.411 10:11:55 -- target/invalid.sh@75 -- # out='request: 00:16:42.411 { 00:16:42.411 "nqn": "nqn.2016-06.io.spdk:cnode24885", 00:16:42.411 "min_cntlid": 65520, 00:16:42.411 "method": "nvmf_create_subsystem", 00:16:42.411 "req_id": 1 00:16:42.411 } 00:16:42.411 Got JSON-RPC error response 00:16:42.411 response: 00:16:42.411 { 00:16:42.411 "code": -32602, 00:16:42.411 "message": "Invalid cntlid range [65520-65519]" 00:16:42.411 }' 00:16:42.411 10:11:55 -- target/invalid.sh@76 -- # [[ request: 00:16:42.411 { 00:16:42.411 "nqn": "nqn.2016-06.io.spdk:cnode24885", 00:16:42.411 "min_cntlid": 65520, 00:16:42.411 "method": "nvmf_create_subsystem", 00:16:42.411 "req_id": 1 00:16:42.411 } 00:16:42.411 Got JSON-RPC error response 00:16:42.411 response: 00:16:42.411 { 00:16:42.411 "code": -32602, 00:16:42.411 "message": "Invalid cntlid range [65520-65519]" 00:16:42.411 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.411 10:11:55 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15181 -I 0 00:16:42.411 [2024-04-24 10:11:55.680466] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15181: invalid cntlid range [1-0] 00:16:42.670 10:11:55 -- target/invalid.sh@77 -- # out='request: 00:16:42.670 { 00:16:42.670 "nqn": "nqn.2016-06.io.spdk:cnode15181", 00:16:42.670 "max_cntlid": 0, 00:16:42.670 "method": "nvmf_create_subsystem", 00:16:42.670 "req_id": 1 00:16:42.670 } 00:16:42.670 Got JSON-RPC error response 00:16:42.670 response: 00:16:42.670 { 00:16:42.670 "code": -32602, 00:16:42.670 "message": "Invalid cntlid range [1-0]" 00:16:42.670 }' 00:16:42.670 
10:11:55 -- target/invalid.sh@78 -- # [[ request: 00:16:42.670 { 00:16:42.670 "nqn": "nqn.2016-06.io.spdk:cnode15181", 00:16:42.670 "max_cntlid": 0, 00:16:42.670 "method": "nvmf_create_subsystem", 00:16:42.670 "req_id": 1 00:16:42.670 } 00:16:42.670 Got JSON-RPC error response 00:16:42.670 response: 00:16:42.670 { 00:16:42.670 "code": -32602, 00:16:42.670 "message": "Invalid cntlid range [1-0]" 00:16:42.670 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.670 10:11:55 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11361 -I 65520 00:16:42.670 [2024-04-24 10:11:55.849039] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11361: invalid cntlid range [1-65520] 00:16:42.670 10:11:55 -- target/invalid.sh@79 -- # out='request: 00:16:42.670 { 00:16:42.670 "nqn": "nqn.2016-06.io.spdk:cnode11361", 00:16:42.670 "max_cntlid": 65520, 00:16:42.670 "method": "nvmf_create_subsystem", 00:16:42.670 "req_id": 1 00:16:42.670 } 00:16:42.670 Got JSON-RPC error response 00:16:42.670 response: 00:16:42.670 { 00:16:42.670 "code": -32602, 00:16:42.670 "message": "Invalid cntlid range [1-65520]" 00:16:42.670 }' 00:16:42.670 10:11:55 -- target/invalid.sh@80 -- # [[ request: 00:16:42.670 { 00:16:42.670 "nqn": "nqn.2016-06.io.spdk:cnode11361", 00:16:42.670 "max_cntlid": 65520, 00:16:42.670 "method": "nvmf_create_subsystem", 00:16:42.670 "req_id": 1 00:16:42.670 } 00:16:42.670 Got JSON-RPC error response 00:16:42.670 response: 00:16:42.670 { 00:16:42.670 "code": -32602, 00:16:42.670 "message": "Invalid cntlid range [1-65520]" 00:16:42.670 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.670 10:11:55 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25231 -i 6 -I 5 00:16:42.929 [2024-04-24 10:11:56.021622] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25231: invalid cntlid range [6-5] 00:16:42.929 10:11:56 -- target/invalid.sh@83 -- # out='request: 00:16:42.929 { 00:16:42.929 "nqn": "nqn.2016-06.io.spdk:cnode25231", 00:16:42.929 "min_cntlid": 6, 00:16:42.929 "max_cntlid": 5, 00:16:42.929 "method": "nvmf_create_subsystem", 00:16:42.929 "req_id": 1 00:16:42.929 } 00:16:42.929 Got JSON-RPC error response 00:16:42.929 response: 00:16:42.929 { 00:16:42.929 "code": -32602, 00:16:42.929 "message": "Invalid cntlid range [6-5]" 00:16:42.929 }' 00:16:42.929 10:11:56 -- target/invalid.sh@84 -- # [[ request: 00:16:42.929 { 00:16:42.929 "nqn": "nqn.2016-06.io.spdk:cnode25231", 00:16:42.929 "min_cntlid": 6, 00:16:42.929 "max_cntlid": 5, 00:16:42.929 "method": "nvmf_create_subsystem", 00:16:42.929 "req_id": 1 00:16:42.929 } 00:16:42.929 Got JSON-RPC error response 00:16:42.929 response: 00:16:42.929 { 00:16:42.929 "code": -32602, 00:16:42.929 "message": "Invalid cntlid range [6-5]" 00:16:42.929 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.929 10:11:56 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:42.929 10:11:56 -- target/invalid.sh@87 -- # out='request: 00:16:42.929 { 00:16:42.929 "name": "foobar", 00:16:42.929 "method": "nvmf_delete_target", 00:16:42.929 "req_id": 1 00:16:42.929 } 00:16:42.929 Got JSON-RPC error response 00:16:42.929 response: 00:16:42.929 { 00:16:42.929 "code": -32602, 00:16:42.929 "message": "The specified target 
doesn'\''t exist, cannot delete it." 00:16:42.929 }' 00:16:42.929 10:11:56 -- target/invalid.sh@88 -- # [[ request: 00:16:42.929 { 00:16:42.929 "name": "foobar", 00:16:42.929 "method": "nvmf_delete_target", 00:16:42.929 "req_id": 1 00:16:42.929 } 00:16:42.929 Got JSON-RPC error response 00:16:42.929 response: 00:16:42.929 { 00:16:42.929 "code": -32602, 00:16:42.929 "message": "The specified target doesn't exist, cannot delete it." 00:16:42.929 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:42.929 10:11:56 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:42.929 10:11:56 -- target/invalid.sh@91 -- # nvmftestfini 00:16:42.929 10:11:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.929 10:11:56 -- nvmf/common.sh@116 -- # sync 00:16:42.929 10:11:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.929 10:11:56 -- nvmf/common.sh@119 -- # set +e 00:16:42.929 10:11:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.929 10:11:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.929 rmmod nvme_tcp 00:16:42.929 rmmod nvme_fabrics 00:16:42.929 rmmod nvme_keyring 00:16:42.929 10:11:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.929 10:11:56 -- nvmf/common.sh@123 -- # set -e 00:16:42.929 10:11:56 -- nvmf/common.sh@124 -- # return 0 00:16:42.929 10:11:56 -- nvmf/common.sh@477 -- # '[' -n 247948 ']' 00:16:42.929 10:11:56 -- nvmf/common.sh@478 -- # killprocess 247948 00:16:42.929 10:11:56 -- common/autotest_common.sh@926 -- # '[' -z 247948 ']' 00:16:42.929 10:11:56 -- common/autotest_common.sh@930 -- # kill -0 247948 00:16:42.929 10:11:56 -- common/autotest_common.sh@931 -- # uname 00:16:42.929 10:11:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:42.929 10:11:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 247948 00:16:43.188 10:11:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:43.188 10:11:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:43.188 10:11:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 247948' 00:16:43.188 killing process with pid 247948 00:16:43.188 10:11:56 -- common/autotest_common.sh@945 -- # kill 247948 00:16:43.188 10:11:56 -- common/autotest_common.sh@950 -- # wait 247948 00:16:43.188 10:11:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:43.188 10:11:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:43.188 10:11:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:43.188 10:11:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.188 10:11:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:43.188 10:11:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.188 10:11:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.188 10:11:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.724 10:11:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:45.724 00:16:45.724 real 0m11.766s 00:16:45.724 user 0m19.067s 00:16:45.724 sys 0m5.024s 00:16:45.724 10:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.724 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:16:45.724 ************************************ 00:16:45.724 END TEST nvmf_invalid 00:16:45.724 ************************************ 00:16:45.725 10:11:58 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:45.725 10:11:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:45.725 10:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.725 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:16:45.725 ************************************ 00:16:45.725 START TEST nvmf_abort 00:16:45.725 ************************************ 00:16:45.725 10:11:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:45.725 * Looking for test storage... 00:16:45.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.725 10:11:58 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.725 10:11:58 -- nvmf/common.sh@7 -- # uname -s 00:16:45.725 10:11:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.725 10:11:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.725 10:11:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.725 10:11:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.725 10:11:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.725 10:11:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.725 10:11:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.725 10:11:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.725 10:11:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.725 10:11:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.725 10:11:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.725 10:11:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.725 10:11:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.725 10:11:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.725 10:11:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.725 10:11:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.725 10:11:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.725 10:11:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.725 10:11:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.725 10:11:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.725 10:11:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.725 10:11:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.725 10:11:58 -- paths/export.sh@5 -- # export PATH 00:16:45.725 10:11:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.725 10:11:58 -- nvmf/common.sh@46 -- # : 0 00:16:45.725 10:11:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:45.725 10:11:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:45.725 10:11:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:45.725 10:11:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.725 10:11:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.725 10:11:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:45.725 10:11:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:45.725 10:11:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:45.725 10:11:58 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.725 10:11:58 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:45.725 10:11:58 -- target/abort.sh@14 -- # nvmftestinit 00:16:45.725 10:11:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:45.725 10:11:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.725 10:11:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:45.725 10:11:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:45.725 10:11:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:45.725 10:11:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.725 10:11:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.725 10:11:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.725 10:11:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:45.725 10:11:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:45.725 10:11:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:45.725 10:11:58 -- common/autotest_common.sh@10 -- # set +x 00:16:50.999 10:12:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
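Before any NVMe/TCP setup, nvmftestinit has to find the physical ports, and gather_supported_nvmf_pci_devs is about to walk the PCI bus for them. The harness does this through a prebuilt pci_bus_cache rather than live sysfs reads, but the per-device check it encodes comes down to roughly the following (a reconstruction, not the harness code itself; 0x8086:0x159b is the ice-driven E810 variant both ports on this rig report):

    # Keep every PCI function whose vendor:device pair is a supported NIC.
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor"); dev=$(<"$pci/device")
        [[ $ven == 0x8086 && $dev == 0x159b ]] || continue
        echo "Found ${pci##*/} ($ven - $dev)"
    done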
00:16:50.999 10:12:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:50.999 10:12:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:50.999 10:12:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:50.999 10:12:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:50.999 10:12:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:50.999 10:12:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:50.999 10:12:03 -- nvmf/common.sh@294 -- # net_devs=() 00:16:50.999 10:12:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:50.999 10:12:03 -- nvmf/common.sh@295 -- # e810=() 00:16:50.999 10:12:03 -- nvmf/common.sh@295 -- # local -ga e810 00:16:50.999 10:12:03 -- nvmf/common.sh@296 -- # x722=() 00:16:50.999 10:12:03 -- nvmf/common.sh@296 -- # local -ga x722 00:16:50.999 10:12:03 -- nvmf/common.sh@297 -- # mlx=() 00:16:50.999 10:12:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:50.999 10:12:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.999 10:12:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.000 10:12:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:51.000 10:12:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:51.000 10:12:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:51.000 10:12:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.000 10:12:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:51.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:51.000 10:12:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.000 10:12:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:51.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:51.000 10:12:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
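With both ports matched, the next stretch of the trace ties each PCI function to its kernel net device. The mechanism is a plain sysfs glob, visible verbatim at nvmf/common.sh@382-@388:

    # sysfs exposes one entry per netdev under the device's net/ directory.
    pci=0000:86:00.0                               # first port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"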
00:16:51.000 10:12:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.000 10:12:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.000 10:12:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.000 10:12:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.000 10:12:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:51.000 Found net devices under 0000:86:00.0: cvl_0_0 00:16:51.000 10:12:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.000 10:12:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.000 10:12:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.000 10:12:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.000 10:12:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.000 10:12:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:51.000 Found net devices under 0000:86:00.1: cvl_0_1 00:16:51.000 10:12:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.000 10:12:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:51.000 10:12:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:51.000 10:12:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:51.000 10:12:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.000 10:12:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.000 10:12:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.000 10:12:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:51.000 10:12:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.000 10:12:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.000 10:12:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:51.000 10:12:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.000 10:12:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.000 10:12:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:51.000 10:12:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:51.000 10:12:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.000 10:12:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.000 10:12:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.000 10:12:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.000 10:12:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:51.000 10:12:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.000 10:12:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.000 10:12:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.000 10:12:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:51.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:51.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:51.000 00:16:51.000 --- 10.0.0.2 ping statistics --- 00:16:51.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.000 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:51.000 10:12:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:16:51.000 00:16:51.000 --- 10.0.0.1 ping statistics --- 00:16:51.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.000 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:51.000 10:12:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.000 10:12:03 -- nvmf/common.sh@410 -- # return 0 00:16:51.000 10:12:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:51.000 10:12:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.000 10:12:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:51.000 10:12:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.000 10:12:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:51.000 10:12:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:51.000 10:12:03 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:51.000 10:12:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.000 10:12:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:51.000 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:16:51.000 10:12:03 -- nvmf/common.sh@469 -- # nvmfpid=252257 00:16:51.000 10:12:03 -- nvmf/common.sh@470 -- # waitforlisten 252257 00:16:51.000 10:12:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:51.000 10:12:03 -- common/autotest_common.sh@819 -- # '[' -z 252257 ']' 00:16:51.000 10:12:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.000 10:12:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.000 10:12:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.000 10:12:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.000 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:16:51.000 [2024-04-24 10:12:03.803722] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:16:51.000 [2024-04-24 10:12:03.803766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.000 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.000 [2024-04-24 10:12:03.860458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.000 [2024-04-24 10:12:03.938215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.000 [2024-04-24 10:12:03.938320] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.000 [2024-04-24 10:12:03.938328] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
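Both directions ping clean, which is the whole point of the namespace split: cvl_0_0 serves the target address 10.0.0.2 inside cvl_0_0_ns_spdk while cvl_0_1 stays in the root namespace as the initiator side. Condensed from the nvmf_tcp_init steps traced above, together with the target launch just logged (path shortened; the harness also flushes stale addresses first and then waits on /var/tmp/spdk.sock before issuing any RPCs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # -m 0xE pins the target to cores 1-3, leaving core 0 free for the
    # host-side abort tool that is started later with -c 0x1.
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &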
00:16:51.000 [2024-04-24 10:12:03.938334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.000 [2024-04-24 10:12:03.938422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.000 [2024-04-24 10:12:03.938506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.000 [2024-04-24 10:12:03.938507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.567 10:12:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.567 10:12:04 -- common/autotest_common.sh@852 -- # return 0 00:16:51.567 10:12:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:51.567 10:12:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 10:12:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.567 10:12:04 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 [2024-04-24 10:12:04.659881] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.567 10:12:04 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 Malloc0 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.567 10:12:04 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 Delay0 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.567 10:12:04 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.567 10:12:04 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.567 10:12:04 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 [2024-04-24 10:12:04.730932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.567 10:12:04 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:51.567 10:12:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.567 10:12:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.567 10:12:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:16:51.567 10:12:04 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:51.567 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.567 [2024-04-24 10:12:04.821450] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:54.099 Initializing NVMe Controllers 00:16:54.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:54.099 Controller IO queue size 128, less than required. 00:16:54.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:54.099 Initialization complete. Launching workers. 00:16:54.099 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 42613 00:16:54.099 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42678, failed to submit 62 00:16:54.099 success 42613, unsuccessful 65, failed 0 00:16:54.099 10:12:06 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:54.099 10:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.099 10:12:06 -- common/autotest_common.sh@10 -- # set +x 00:16:54.099 10:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.099 10:12:06 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:54.099 10:12:06 -- target/abort.sh@38 -- # nvmftestfini 00:16:54.099 10:12:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:54.099 10:12:06 -- nvmf/common.sh@116 -- # sync 00:16:54.099 10:12:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:54.099 10:12:06 -- nvmf/common.sh@119 -- # set +e 00:16:54.099 10:12:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:54.099 10:12:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:54.099 rmmod nvme_tcp 00:16:54.099 rmmod nvme_fabrics 00:16:54.099 rmmod nvme_keyring 00:16:54.099 10:12:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:54.099 10:12:07 -- nvmf/common.sh@123 -- # set -e 00:16:54.099 10:12:07 -- nvmf/common.sh@124 -- # return 0 00:16:54.099 10:12:07 -- nvmf/common.sh@477 -- # '[' -n 252257 ']' 00:16:54.099 10:12:07 -- nvmf/common.sh@478 -- # killprocess 252257 00:16:54.099 10:12:07 -- common/autotest_common.sh@926 -- # '[' -z 252257 ']' 00:16:54.099 10:12:07 -- common/autotest_common.sh@930 -- # kill -0 252257 00:16:54.100 10:12:07 -- common/autotest_common.sh@931 -- # uname 00:16:54.100 10:12:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.100 10:12:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 252257 00:16:54.100 10:12:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:54.100 10:12:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:54.100 10:12:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 252257' 00:16:54.100 killing process with pid 252257 00:16:54.100 10:12:07 -- common/autotest_common.sh@945 -- # kill 252257 00:16:54.100 10:12:07 -- common/autotest_common.sh@950 -- # wait 252257 00:16:54.100 10:12:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:54.100 10:12:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:54.100 10:12:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:54.100 10:12:07 -- nvmf/common.sh@273 --
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.100 10:12:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:54.100 10:12:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.100 10:12:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.100 10:12:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.635 10:12:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:56.635 00:16:56.635 real 0m10.798s 00:16:56.635 user 0m13.122s 00:16:56.635 sys 0m4.790s 00:16:56.635 10:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.635 10:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.635 ************************************ 00:16:56.635 END TEST nvmf_abort 00:16:56.635 ************************************ 00:16:56.635 10:12:09 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:56.635 10:12:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:56.635 10:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:56.635 10:12:09 -- common/autotest_common.sh@10 -- # set +x 00:16:56.635 ************************************ 00:16:56.635 START TEST nvmf_ns_hotplug_stress 00:16:56.635 ************************************ 00:16:56.635 10:12:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:56.635 * Looking for test storage... 00:16:56.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.635 10:12:09 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.635 10:12:09 -- nvmf/common.sh@7 -- # uname -s 00:16:56.635 10:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.635 10:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.635 10:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.635 10:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.635 10:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.635 10:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.635 10:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.635 10:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.635 10:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.635 10:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.635 10:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.635 10:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.635 10:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.635 10:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.635 10:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.635 10:12:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.635 10:12:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.635 10:12:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.635 10:12:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.635 10:12:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.635 10:12:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.635 10:12:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.635 10:12:09 -- paths/export.sh@5 -- # export PATH 00:16:56.635 10:12:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.635 10:12:09 -- nvmf/common.sh@46 -- # : 0 00:16:56.635 10:12:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:56.635 10:12:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:56.635 10:12:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:56.635 10:12:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.635 10:12:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.635 10:12:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:56.635 10:12:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:56.635 10:12:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:56.635 10:12:09 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.635 10:12:09 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:16:56.635 10:12:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:56.635 10:12:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.635 10:12:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:56.635 10:12:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:56.635 10:12:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:56.635 10:12:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:56.635 10:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.635 10:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.635 10:12:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:56.635 10:12:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:56.635 10:12:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:56.635 10:12:09 -- common/autotest_common.sh@10 -- # set +x 00:17:01.906 10:12:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:01.906 10:12:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:01.906 10:12:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:01.906 10:12:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:01.906 10:12:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:01.906 10:12:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:01.906 10:12:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:01.906 10:12:14 -- nvmf/common.sh@294 -- # net_devs=() 00:17:01.906 10:12:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:01.906 10:12:14 -- nvmf/common.sh@295 -- # e810=() 00:17:01.906 10:12:14 -- nvmf/common.sh@295 -- # local -ga e810 00:17:01.906 10:12:14 -- nvmf/common.sh@296 -- # x722=() 00:17:01.906 10:12:14 -- nvmf/common.sh@296 -- # local -ga x722 00:17:01.906 10:12:14 -- nvmf/common.sh@297 -- # mlx=() 00:17:01.906 10:12:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:01.906 10:12:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.906 10:12:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:01.906 10:12:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:01.906 10:12:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:01.906 10:12:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.906 10:12:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:01.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:01.906 10:12:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.906 10:12:14 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:01.906 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:01.906 10:12:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:01.906 10:12:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:01.906 10:12:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.906 10:12:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.906 10:12:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.906 10:12:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.906 10:12:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:01.906 Found net devices under 0000:86:00.0: cvl_0_0 00:17:01.907 10:12:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.907 10:12:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.907 10:12:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.907 10:12:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.907 10:12:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.907 10:12:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:01.907 Found net devices under 0000:86:00.1: cvl_0_1 00:17:01.907 10:12:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.907 10:12:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:01.907 10:12:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:01.907 10:12:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:01.907 10:12:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:01.907 10:12:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:01.907 10:12:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.907 10:12:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.907 10:12:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.907 10:12:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:01.907 10:12:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.907 10:12:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.907 10:12:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:01.907 10:12:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.907 10:12:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.907 10:12:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:01.907 10:12:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:01.907 10:12:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.907 10:12:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.907 10:12:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.907 10:12:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.907 10:12:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:01.907 10:12:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
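The ip(8) commands above and just below assemble the two-endpoint topology that this phy test runs on: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator side. A minimal sketch of how to spot-check the end state by hand, assuming the interface names and addresses shown in this log:

# netns cvl_0_0_ns_spdk : cvl_0_0 = 10.0.0.2/24  (target; nvmf_tgt is launched
#                                                 below via "ip netns exec")
# root namespace        : cvl_0_1 = 10.0.0.1/24  (initiator; perf/abort tools)
sudo ip netns exec cvl_0_0_ns_spdk ip -br addr show cvl_0_0   # expect 10.0.0.2/24
ip -br addr show cvl_0_1                                      # expect 10.0.0.1/24
ping -c 1 10.0.0.2   # initiator -> target across the physical link, as done below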
00:17:01.907 10:12:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.907 10:12:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.907 10:12:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:01.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:17:01.907 00:17:01.907 --- 10.0.0.2 ping statistics --- 00:17:01.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.907 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:17:01.907 10:12:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:17:01.907 00:17:01.907 --- 10.0.0.1 ping statistics --- 00:17:01.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.907 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:01.907 10:12:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.907 10:12:14 -- nvmf/common.sh@410 -- # return 0 00:17:01.907 10:12:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:01.907 10:12:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.907 10:12:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:01.907 10:12:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:01.907 10:12:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.907 10:12:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:01.907 10:12:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:01.907 10:12:14 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:17:01.907 10:12:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:01.907 10:12:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:01.907 10:12:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.907 10:12:14 -- nvmf/common.sh@469 -- # nvmfpid=256680 00:17:01.907 10:12:14 -- nvmf/common.sh@470 -- # waitforlisten 256680 00:17:01.907 10:12:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:01.907 10:12:14 -- common/autotest_common.sh@819 -- # '[' -z 256680 ']' 00:17:01.907 10:12:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.907 10:12:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.907 10:12:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.907 10:12:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.907 10:12:14 -- common/autotest_common.sh@10 -- # set +x 00:17:01.907 [2024-04-24 10:12:14.894087] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:17:01.907 [2024-04-24 10:12:14.894152] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.907 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.907 [2024-04-24 10:12:14.950855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.907 [2024-04-24 10:12:15.028558] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.907 [2024-04-24 10:12:15.028666] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.907 [2024-04-24 10:12:15.028675] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.907 [2024-04-24 10:12:15.028681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.907 [2024-04-24 10:12:15.028775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.907 [2024-04-24 10:12:15.028858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.907 [2024-04-24 10:12:15.028860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.473 10:12:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.473 10:12:15 -- common/autotest_common.sh@852 -- # return 0 00:17:02.473 10:12:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:02.473 10:12:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:02.473 10:12:15 -- common/autotest_common.sh@10 -- # set +x 00:17:02.473 10:12:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.473 10:12:15 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:17:02.473 10:12:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:02.731 [2024-04-24 10:12:15.894656] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.731 10:12:15 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:02.989 10:12:16 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.989 [2024-04-24 10:12:16.259995] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.247 10:12:16 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.247 10:12:16 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:17:03.505 Malloc0 00:17:03.506 10:12:16 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:03.764 Delay0 00:17:03.764 10:12:16 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:03.764 10:12:17 -- target/ns_hotplug_stress.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:17:04.022 NULL1 00:17:04.022 10:12:17 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:04.280 10:12:17 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:17:04.280 10:12:17 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=257178 00:17:04.280 10:12:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:04.280 10:12:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.280 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.280 10:12:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:04.539 10:12:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:17:04.539 10:12:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:17:04.797 true 00:17:04.797 10:12:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:04.797 10:12:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.055 10:12:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.055 10:12:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:17:05.055 10:12:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:17:05.314 true 00:17:05.314 10:12:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:05.314 10:12:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.572 10:12:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.830 10:12:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:17:05.830 10:12:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:17:05.830 true 00:17:05.830 10:12:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:05.830 10:12:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.089 10:12:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:06.347 10:12:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:17:06.347 10:12:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:17:06.347 true 00:17:06.347 10:12:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:06.347 10:12:19 -- target/ns_hotplug_stress.sh@36 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.605 10:12:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:06.863 10:12:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:17:06.863 10:12:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:17:07.121 true 00:17:07.121 10:12:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:07.122 10:12:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.380 10:12:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:07.380 10:12:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:17:07.380 10:12:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:17:07.638 true 00:17:07.638 10:12:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:07.638 10:12:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.897 10:12:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:07.897 10:12:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:17:07.897 10:12:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:17:08.155 true 00:17:08.155 10:12:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:08.155 10:12:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.415 10:12:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:08.714 10:12:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:17:08.714 10:12:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:17:08.714 true 00:17:08.715 10:12:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:08.715 10:12:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.986 10:12:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:08.986 10:12:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:17:08.986 10:12:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:17:09.243 true 00:17:09.243 10:12:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:09.243 10:12:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
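The remove_ns / add_ns / bdev_null_resize triplets that repeat from here to the end of the test are the hotplug-stress loop itself. Reconstructed from the RPC calls visible in this log, the loop amounts to the sketch below; the while-loop framing and the stderr redirect are editorial assumptions, while PERF_PID, null_size, and the RPC arguments are as logged:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
# Keep cycling the namespace for as long as the spdk_nvme_perf workload runs.
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # hot-remove NSID 1 (Delay0) under I/O
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # immediately re-attach the same bdev
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"   # and grow NULL1 by 1 MiB per pass
done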
00:17:09.500 10:12:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:09.758 10:12:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:17:09.758 10:12:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:17:09.758 true 00:17:09.758 10:12:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:09.758 10:12:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.016 10:12:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:10.274 10:12:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:17:10.274 10:12:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:17:10.274 true 00:17:10.274 10:12:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:10.274 10:12:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.532 10:12:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:10.789 10:12:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:17:10.789 10:12:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:17:11.048 true 00:17:11.048 10:12:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:11.048 10:12:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.048 10:12:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:11.307 10:12:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:17:11.307 10:12:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:17:11.566 true 00:17:11.566 10:12:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:11.566 10:12:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.566 10:12:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:11.824 10:12:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:17:11.824 10:12:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:17:12.082 true 00:17:12.082 10:12:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:12.082 10:12:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.341 10:12:25 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:12.599 10:12:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:17:12.599 10:12:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:17:12.599 true 00:17:12.599 10:12:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:12.599 10:12:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.858 10:12:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.117 10:12:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:17:13.117 10:12:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:17:13.117 true 00:17:13.117 10:12:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:13.117 10:12:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.376 10:12:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.635 10:12:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:17:13.635 10:12:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:17:13.635 true 00:17:13.894 10:12:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:13.894 10:12:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.894 10:12:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:14.153 10:12:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:17:14.153 10:12:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:17:14.412 true 00:17:14.412 10:12:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:14.412 10:12:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.412 10:12:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:14.671 10:12:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:17:14.671 10:12:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:17:14.931 true 00:17:14.931 10:12:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:14.931 10:12:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.931 10:12:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
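For context, the bdev stack this loop exercises was assembled at the start of the test. All of the RPCs below appear earlier in this log; only the unit annotations are editorial (sizes in MiB, block sizes in bytes, delay latencies in microseconds):

rpc.py bdev_malloc_create 32 512 -b Malloc0     # 32 MiB RAM-backed bdev, 512 B blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 read+write latency
rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB null bdev; the resize target
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2

The deliberately slow Delay0 namespace is the one being hot-removed, which keeps I/O in flight at the moment of removal; NULL1 (NSID 2, the device in the perf summary further below) is only resized.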
00:17:15.190 10:12:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:17:15.190 10:12:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:17:15.448 true 00:17:15.448 10:12:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:15.448 10:12:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.708 10:12:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:15.708 10:12:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:17:15.708 10:12:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:17:15.966 true 00:17:15.966 10:12:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:15.966 10:12:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.224 10:12:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:16.224 10:12:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:17:16.224 10:12:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:17:16.483 true 00:17:16.483 10:12:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:16.483 10:12:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.741 10:12:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:16.999 10:12:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:17:16.999 10:12:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:17:16.999 true 00:17:16.999 10:12:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:16.999 10:12:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.257 10:12:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.514 10:12:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:17:17.515 10:12:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:17:17.515 true 00:17:17.817 10:12:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:17.817 10:12:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.817 10:12:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.075 10:12:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:17:18.075 10:12:31 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:17:18.075 true 00:17:18.075 10:12:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:18.075 10:12:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.333 10:12:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.591 10:12:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:17:18.591 10:12:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:17:18.850 true 00:17:18.850 10:12:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:18.850 10:12:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.850 10:12:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.109 10:12:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:17:19.109 10:12:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:17:19.369 true 00:17:19.369 10:12:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:19.369 10:12:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.629 10:12:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.629 10:12:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:17:19.629 10:12:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:17:19.887 true 00:17:19.887 10:12:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:19.887 10:12:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.146 10:12:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.404 10:12:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:17:20.404 10:12:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:17:20.404 true 00:17:20.404 10:12:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:20.404 10:12:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.663 10:12:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.922 10:12:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:17:20.922 10:12:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1030 00:17:20.922 true 00:17:21.180 10:12:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:21.180 10:12:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.180 10:12:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:21.438 10:12:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:17:21.438 10:12:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:17:21.438 true 00:17:21.697 10:12:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:21.697 10:12:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.697 10:12:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:21.955 10:12:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:17:21.955 10:12:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:17:22.213 true 00:17:22.213 10:12:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:22.213 10:12:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.213 10:12:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.471 10:12:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:17:22.471 10:12:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:17:22.730 true 00:17:22.730 10:12:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:22.730 10:12:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.988 10:12:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.988 10:12:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:17:22.988 10:12:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:17:23.246 true 00:17:23.246 10:12:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:23.246 10:12:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.504 10:12:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:23.762 10:12:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:17:23.762 10:12:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:17:24.020 true 00:17:24.020 10:12:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:24.020 10:12:37 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.020 10:12:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:24.278 10:12:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:17:24.278 10:12:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:17:24.536 true 00:17:24.536 10:12:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:24.536 10:12:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.536 10:12:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:24.795 10:12:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:17:24.795 10:12:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:17:25.055 true 00:17:25.055 10:12:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:25.055 10:12:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.314 10:12:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:25.314 10:12:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:17:25.314 10:12:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:17:25.573 true 00:17:25.573 10:12:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:25.573 10:12:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.832 10:12:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:25.832 10:12:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:17:25.832 10:12:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:17:26.091 true 00:17:26.091 10:12:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:26.091 10:12:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.350 10:12:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:26.609 10:12:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:17:26.609 10:12:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:17:26.609 true 00:17:26.609 10:12:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:26.609 10:12:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.868 10:12:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:27.127 10:12:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:17:27.127 10:12:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:17:27.127 true 00:17:27.127 10:12:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:27.127 10:12:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.386 10:12:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:27.645 10:12:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:17:27.645 10:12:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:17:27.645 true 00:17:27.645 10:12:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:27.645 10:12:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.904 10:12:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:28.164 10:12:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:17:28.164 10:12:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:17:28.164 true 00:17:28.423 10:12:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:28.423 10:12:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.423 10:12:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:28.682 10:12:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:17:28.682 10:12:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:17:28.941 true 00:17:28.941 10:12:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:28.941 10:12:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.200 10:12:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:29.201 10:12:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:17:29.201 10:12:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:17:29.459 true 00:17:29.459 10:12:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:29.459 10:12:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.718 10:12:42 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:29.718 10:12:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:17:29.718 10:12:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:17:29.977 true 00:17:29.977 10:12:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:29.977 10:12:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.237 10:12:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:30.495 10:12:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:17:30.495 10:12:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:17:30.495 true 00:17:30.495 10:12:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:30.495 10:12:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.754 10:12:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:31.013 10:12:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:17:31.013 10:12:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:17:31.013 true 00:17:31.013 10:12:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:31.013 10:12:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.271 10:12:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:31.553 10:12:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:17:31.553 10:12:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:17:31.824 true 00:17:31.824 10:12:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:31.824 10:12:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.824 10:12:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:32.083 10:12:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:17:32.083 10:12:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:17:32.342 true 00:17:32.342 10:12:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178 00:17:32.342 10:12:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:32.342 10:12:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
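The loop ends when the 30-second perf workload exits: a few lines below, kill -0 257178 starts failing ("No such process") and the script falls through to teardown, right after perf prints its completion summary. For reference, that workload was started earlier in this log with the invocation below; the command and flags are as logged, the comments are editorial:

# -c 0x1      core mask: run on a single core
# -r '...'    transport ID of the in-namespace target (TCP, 10.0.0.2:4420)
# -t 30       run time in seconds; this is what bounds the whole stress loop
# -q 128      queue depth, with -o 512: 512-byte I/Os
# -w randread random-read workload
# -Q 1000     perf's continue-on-error setting (as invoked here), so the I/O
#             failures caused by hot-removed namespaces do not abort the run
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000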
00:17:32.601 10:12:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051
00:17:32.601 10:12:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:17:32.860 true
00:17:32.860 10:12:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178
00:17:32.860 10:12:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:33.119 10:12:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:17:33.119 10:12:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052
00:17:33.119 10:12:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:17:33.379 true
00:17:33.379 10:12:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178
00:17:33.379 10:12:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:33.638 10:12:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:17:33.897 10:12:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053
00:17:33.897 10:12:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:17:33.897 true
00:17:33.897 10:12:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178
00:17:33.897 10:12:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:34.156 10:12:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:17:34.415 10:12:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054
00:17:34.415 10:12:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:17:34.415 true
00:17:34.674 10:12:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178
00:17:34.674 10:12:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:34.674 Initializing NVMe Controllers
00:17:34.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:34.674 Controller IO queue size 128, less than required.
00:17:34.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:34.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:34.674 Initialization complete. Launching workers.
00:17:34.674 ========================================================
00:17:34.674 Latency(us)
00:17:34.674 Device Information : IOPS MiB/s Average min max
00:17:34.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29040.08 14.18 4407.54 1546.12 7331.39
00:17:34.674 ========================================================
00:17:34.674 Total : 29040.08 14.18 4407.54 1546.12 7331.39
00:17:34.674
00:17:34.674 10:12:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:17:34.934 10:12:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055
00:17:34.934 10:12:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:17:35.193 true
00:17:35.193 10:12:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 257178
00:17:35.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (257178) - No such process
00:17:35.193 10:12:48 -- target/ns_hotplug_stress.sh@44 -- # wait 257178
00:17:35.193 10:12:48 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:17:35.193 10:12:48 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:17:35.193 10:12:48 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:35.193 10:12:48 -- nvmf/common.sh@116 -- # sync
00:17:35.193 10:12:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:35.193 10:12:48 -- nvmf/common.sh@119 -- # set +e
00:17:35.193 10:12:48 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:35.193 10:12:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:35.193 rmmod nvme_tcp
00:17:35.193 rmmod nvme_fabrics
00:17:35.193 rmmod nvme_keyring
00:17:35.193 10:12:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:35.193 10:12:48 -- nvmf/common.sh@123 -- # set -e
00:17:35.193 10:12:48 -- nvmf/common.sh@124 -- # return 0
00:17:35.193 10:12:48 -- nvmf/common.sh@477 -- # '[' -n 256680 ']'
00:17:35.193 10:12:48 -- nvmf/common.sh@478 -- # killprocess 256680
00:17:35.193 10:12:48 -- common/autotest_common.sh@926 -- # '[' -z 256680 ']'
00:17:35.193 10:12:48 -- common/autotest_common.sh@930 -- # kill -0 256680
00:17:35.193 10:12:48 -- common/autotest_common.sh@931 -- # uname
00:17:35.193 10:12:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:35.193 10:12:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 256680
00:17:35.193 10:12:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:35.193 10:12:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:35.193 10:12:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 256680'
00:17:35.193 killing process with pid 256680
00:17:35.193 10:12:48 -- common/autotest_common.sh@945 -- # kill 256680
00:17:35.193 10:12:48 -- common/autotest_common.sh@950 -- # wait 256680
00:17:35.453 10:12:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:35.453 10:12:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:35.453 10:12:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:35.453 10:12:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:35.453 10:12:48 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:35.453 10:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:35.453 10:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:35.453 10:12:48 -- common/autotest_common.sh@22
-- # _remove_spdk_ns 00:17:37.360 10:12:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:37.360 00:17:37.360 real 0m41.243s 00:17:37.360 user 2m36.644s 00:17:37.360 sys 0m12.342s 00:17:37.360 10:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.360 10:12:50 -- common/autotest_common.sh@10 -- # set +x 00:17:37.360 ************************************ 00:17:37.360 END TEST nvmf_ns_hotplug_stress 00:17:37.360 ************************************ 00:17:37.620 10:12:50 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:37.620 10:12:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:37.620 10:12:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:37.620 10:12:50 -- common/autotest_common.sh@10 -- # set +x 00:17:37.620 ************************************ 00:17:37.620 START TEST nvmf_connect_stress 00:17:37.620 ************************************ 00:17:37.620 10:12:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:37.620 * Looking for test storage... 00:17:37.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.620 10:12:50 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.620 10:12:50 -- nvmf/common.sh@7 -- # uname -s 00:17:37.620 10:12:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.620 10:12:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.620 10:12:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.620 10:12:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.620 10:12:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.620 10:12:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.620 10:12:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.620 10:12:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.620 10:12:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.620 10:12:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.620 10:12:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.620 10:12:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.620 10:12:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.620 10:12:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.620 10:12:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.620 10:12:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.620 10:12:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.620 10:12:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.620 10:12:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.620 10:12:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.620 10:12:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.620 10:12:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.620 10:12:50 -- paths/export.sh@5 -- # export PATH 00:17:37.620 10:12:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.620 10:12:50 -- nvmf/common.sh@46 -- # : 0 00:17:37.620 10:12:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:37.620 10:12:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:37.620 10:12:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:37.620 10:12:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.620 10:12:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.620 10:12:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:37.620 10:12:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:37.620 10:12:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:37.620 10:12:50 -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:37.620 10:12:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:37.620 10:12:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.620 10:12:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:37.621 10:12:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:37.621 10:12:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:37.621 10:12:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.621 10:12:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.621 10:12:50 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.621 10:12:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:37.621 10:12:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:37.621 10:12:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:37.621 10:12:50 -- common/autotest_common.sh@10 -- # set +x 00:17:42.890 10:12:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.890 10:12:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:42.890 10:12:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:42.890 10:12:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:42.890 10:12:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:42.890 10:12:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:42.890 10:12:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:42.890 10:12:56 -- nvmf/common.sh@294 -- # net_devs=() 00:17:42.890 10:12:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:42.890 10:12:56 -- nvmf/common.sh@295 -- # e810=() 00:17:42.890 10:12:56 -- nvmf/common.sh@295 -- # local -ga e810 00:17:42.890 10:12:56 -- nvmf/common.sh@296 -- # x722=() 00:17:42.890 10:12:56 -- nvmf/common.sh@296 -- # local -ga x722 00:17:42.890 10:12:56 -- nvmf/common.sh@297 -- # mlx=() 00:17:42.890 10:12:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:42.890 10:12:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.890 10:12:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:42.890 10:12:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:42.890 10:12:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:42.890 10:12:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.890 10:12:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:42.890 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:42.890 10:12:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.890 10:12:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:42.890 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:42.890 
10:12:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:42.890 10:12:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.890 10:12:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.890 10:12:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.890 10:12:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.890 10:12:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:42.890 Found net devices under 0000:86:00.0: cvl_0_0 00:17:42.890 10:12:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.890 10:12:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.890 10:12:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.890 10:12:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.890 10:12:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.890 10:12:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:42.890 Found net devices under 0000:86:00.1: cvl_0_1 00:17:42.890 10:12:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.890 10:12:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:42.890 10:12:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:42.890 10:12:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:42.890 10:12:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:42.890 10:12:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.890 10:12:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.890 10:12:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.890 10:12:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:42.890 10:12:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.890 10:12:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.890 10:12:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:42.890 10:12:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.890 10:12:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.890 10:12:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:42.890 10:12:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:42.890 10:12:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.890 10:12:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.148 10:12:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.148 10:12:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.148 10:12:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:43.148 10:12:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.148 10:12:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.148 10:12:56 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:43.148 10:12:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:17:43.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:43.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms
00:17:43.148
00:17:43.148 --- 10.0.0.2 ping statistics ---
00:17:43.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:43.148 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:17:43.148 10:12:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:43.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:43.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms
00:17:43.148
00:17:43.148 --- 10.0.0.1 ping statistics ---
00:17:43.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:43.148 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:17:43.148 10:12:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:43.148 10:12:56 -- nvmf/common.sh@410 -- # return 0
00:17:43.148 10:12:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:17:43.148 10:12:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:43.148 10:12:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:17:43.148 10:12:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:17:43.148 10:12:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:43.148 10:12:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:17:43.148 10:12:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:17:43.148 10:12:56 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:17:43.148 10:12:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:17:43.148 10:12:56 -- common/autotest_common.sh@712 -- # xtrace_disable
00:17:43.148 10:12:56 -- common/autotest_common.sh@10 -- # set +x
00:17:43.148 10:12:56 -- nvmf/common.sh@469 -- # nvmfpid=266003
00:17:43.148 10:12:56 -- nvmf/common.sh@470 -- # waitforlisten 266003
00:17:43.148 10:12:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:43.148 10:12:56 -- common/autotest_common.sh@819 -- # '[' -z 266003 ']'
00:17:43.148 10:12:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:43.148 10:12:56 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:43.148 10:12:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:43.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:43.148 10:12:56 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:43.148 10:12:56 -- common/autotest_common.sh@10 -- # set +x
00:17:43.407 [2024-04-24 10:12:56.430352] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:17:43.407 [2024-04-24 10:12:56.430393] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.407 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.407 [2024-04-24 10:12:56.486607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.407 [2024-04-24 10:12:56.556837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.407 [2024-04-24 10:12:56.556949] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.407 [2024-04-24 10:12:56.556957] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.407 [2024-04-24 10:12:56.556963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.407 [2024-04-24 10:12:56.557067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.407 [2024-04-24 10:12:56.557159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.407 [2024-04-24 10:12:56.557160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.974 10:12:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.974 10:12:57 -- common/autotest_common.sh@852 -- # return 0 00:17:43.974 10:12:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.974 10:12:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:43.974 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.232 10:12:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.232 10:12:57 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.232 10:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.232 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.232 [2024-04-24 10:12:57.270563] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.232 10:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.233 10:12:57 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:44.233 10:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.233 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.233 10:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.233 10:12:57 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.233 10:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.233 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.233 [2024-04-24 10:12:57.307205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.233 10:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.233 10:12:57 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:44.233 10:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.233 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.233 NULL1 00:17:44.233 10:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.233 10:12:57 -- target/connect_stress.sh@21 -- # PERF_PID=266251 00:17:44.233 10:12:57 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:44.233 10:12:57 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:44.233 10:12:57 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # seq 1 20 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.233 10:12:57 -- target/connect_stress.sh@28 -- # cat 00:17:44.233 10:12:57 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:44.233 10:12:57 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:17:44.233 10:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.233 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:44.492 10:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.492 10:12:57 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:44.492 10:12:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.492 10:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.492 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:17:45.058 10:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.058 10:12:58 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:45.058 10:12:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.058 10:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.058 10:12:58 -- common/autotest_common.sh@10 -- # set +x 00:17:45.317 10:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.317 10:12:58 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:45.317 10:12:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.317 10:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.317 10:12:58 -- common/autotest_common.sh@10 -- # set +x 00:17:45.575 10:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.575 10:12:58 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:45.575 10:12:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.575 10:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.575 10:12:58 -- common/autotest_common.sh@10 -- # set +x 00:17:45.835 10:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.835 10:12:59 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:45.835 10:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.835 10:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.835 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:46.094 10:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.094 10:12:59 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:46.094 10:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.094 10:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.094 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:46.661 10:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.661 10:12:59 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:46.661 10:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.661 10:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.661 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:46.921 10:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.921 10:12:59 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:46.921 10:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.921 10:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.921 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:47.179 10:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:47.179 10:13:00 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:47.179 10:13:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.179 10:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:47.179 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:17:47.438 10:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:47.438 10:13:00 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:47.438 10:13:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.438 
10:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:47.438 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:17:47.696 10:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:47.696 10:13:00 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:47.696 10:13:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.696 10:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:47.696 10:13:00 -- common/autotest_common.sh@10 -- # set +x 00:17:48.264 10:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:48.264 10:13:01 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:48.264 10:13:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.264 10:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:48.264 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 10:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:48.650 10:13:01 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:48.650 10:13:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.650 10:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:48.650 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:17:48.650 10:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:48.650 10:13:01 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:48.650 10:13:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.650 10:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:48.650 10:13:01 -- common/autotest_common.sh@10 -- # set +x 00:17:49.218 10:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.218 10:13:02 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:49.218 10:13:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.218 10:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.218 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:49.477 10:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.477 10:13:02 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:49.477 10:13:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.477 10:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.477 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:49.736 10:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.736 10:13:02 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:49.736 10:13:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.736 10:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.736 10:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:49.995 10:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.995 10:13:03 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:49.995 10:13:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.995 10:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.995 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:50.253 10:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.253 10:13:03 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:50.253 10:13:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.253 10:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.253 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:50.822 10:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.822 10:13:03 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:50.822 10:13:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.822 10:13:03 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.822 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.080 10:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.080 10:13:04 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:51.080 10:13:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.080 10:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.080 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:17:51.339 10:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.339 10:13:04 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:51.339 10:13:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.339 10:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.339 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:17:51.598 10:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.598 10:13:04 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:51.598 10:13:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.598 10:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.598 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.166 10:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.166 10:13:05 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:52.166 10:13:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.166 10:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.166 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.425 10:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.425 10:13:05 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:52.425 10:13:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.425 10:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.425 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.684 10:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.684 10:13:05 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:52.684 10:13:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.684 10:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.684 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.943 10:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.943 10:13:06 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:52.943 10:13:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.943 10:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.943 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:17:53.201 10:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.201 10:13:06 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:53.201 10:13:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.201 10:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.201 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 10:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.769 10:13:06 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:53.769 10:13:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.769 10:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.769 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:17:54.028 10:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.028 10:13:07 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:54.028 10:13:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.028 10:13:07 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.028 10:13:07 -- common/autotest_common.sh@10 -- # set +x 00:17:54.287 10:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.287 10:13:07 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:54.287 10:13:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.287 10:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.287 10:13:07 -- common/autotest_common.sh@10 -- # set +x 00:17:54.287 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:54.547 10:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.547 10:13:07 -- target/connect_stress.sh@34 -- # kill -0 266251 00:17:54.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (266251) - No such process 00:17:54.547 10:13:07 -- target/connect_stress.sh@38 -- # wait 266251 00:17:54.547 10:13:07 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:54.547 10:13:07 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:54.547 10:13:07 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:54.547 10:13:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:54.547 10:13:07 -- nvmf/common.sh@116 -- # sync 00:17:54.547 10:13:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:54.547 10:13:07 -- nvmf/common.sh@119 -- # set +e 00:17:54.547 10:13:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:54.547 10:13:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:54.547 rmmod nvme_tcp 00:17:54.547 rmmod nvme_fabrics 00:17:54.547 rmmod nvme_keyring 00:17:54.547 10:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:54.547 10:13:07 -- nvmf/common.sh@123 -- # set -e 00:17:54.547 10:13:07 -- nvmf/common.sh@124 -- # return 0 00:17:54.547 10:13:07 -- nvmf/common.sh@477 -- # '[' -n 266003 ']' 00:17:54.547 10:13:07 -- nvmf/common.sh@478 -- # killprocess 266003 00:17:54.547 10:13:07 -- common/autotest_common.sh@926 -- # '[' -z 266003 ']' 00:17:54.547 10:13:07 -- common/autotest_common.sh@930 -- # kill -0 266003 00:17:54.547 10:13:07 -- common/autotest_common.sh@931 -- # uname 00:17:54.547 10:13:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.547 10:13:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 266003 00:17:54.547 10:13:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:54.547 10:13:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:54.547 10:13:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 266003' 00:17:54.547 killing process with pid 266003 00:17:54.547 10:13:07 -- common/autotest_common.sh@945 -- # kill 266003 00:17:54.547 10:13:07 -- common/autotest_common.sh@950 -- # wait 266003 00:17:54.806 10:13:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:54.806 10:13:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:54.806 10:13:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:54.806 10:13:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.806 10:13:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:54.806 10:13:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.806 10:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.806 10:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.345 10:13:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
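The long run of `kill -0 266251` / `rpc_cmd` pairs that ends above is connect_stress.sh babysitting its stress client: while the connect_stress binary (pid 266251) keeps tearing connections up and down against cnode1, the script repeatedly replays the batch of twenty RPCs it queued into rpc.txt, and only falls through to `wait` and teardown once the client has exited. An illustrative sketch, with the loop body assumed from the xtrace output:

```sh
# Rough shape of connect_stress.sh lines 34-39 as traced above; rpc_cmd is
# the autotest helper that forwards commands to /var/tmp/spdk.sock.
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
PERF_PID=266251   # pid of the connect_stress client (from the log)

while kill -0 "$PERF_PID" 2>/dev/null; do   # line 34: client still running?
    rpc_cmd < "$rpcs"                       # line 35: replay the queued batch
done
wait "$PERF_PID"                            # line 38: reap the exited client
rm -f "$rpcs"                               # line 39: matches the teardown above
```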
00:17:57.345 00:17:57.345 real 0m19.428s 00:17:57.345 user 0m41.527s 00:17:57.345 sys 0m8.223s 00:17:57.345 10:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.345 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:17:57.345 ************************************ 00:17:57.345 END TEST nvmf_connect_stress 00:17:57.345 ************************************ 00:17:57.345 10:13:10 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:57.345 10:13:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:57.345 10:13:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:57.345 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:17:57.345 ************************************ 00:17:57.345 START TEST nvmf_fused_ordering 00:17:57.345 ************************************ 00:17:57.345 10:13:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:57.345 * Looking for test storage... 00:17:57.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.345 10:13:10 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.345 10:13:10 -- nvmf/common.sh@7 -- # uname -s 00:17:57.345 10:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.345 10:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.345 10:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.345 10:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.345 10:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.345 10:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.345 10:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.345 10:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.345 10:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.345 10:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.345 10:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.345 10:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.345 10:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.345 10:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.345 10:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.345 10:13:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.345 10:13:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.345 10:13:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.345 10:13:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.345 10:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:57.345 10:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.345 10:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.345 10:13:10 -- paths/export.sh@5 -- # export PATH 00:17:57.345 10:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.345 10:13:10 -- nvmf/common.sh@46 -- # : 0 00:17:57.345 10:13:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:57.345 10:13:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:57.345 10:13:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:57.345 10:13:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.345 10:13:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.345 10:13:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:57.345 10:13:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:57.345 10:13:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:57.345 10:13:10 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:57.345 10:13:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:57.345 10:13:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.345 10:13:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:57.345 10:13:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:57.345 10:13:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:57.345 10:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.345 10:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.345 10:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.345 10:13:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:57.345 10:13:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:57.345 10:13:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:57.345 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:18:01.535 10:13:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:01.535 10:13:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:01.535 10:13:14 -- 
nvmf/common.sh@290 -- # local -a pci_devs 00:18:01.535 10:13:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:01.535 10:13:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:01.535 10:13:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:01.535 10:13:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:01.535 10:13:14 -- nvmf/common.sh@294 -- # net_devs=() 00:18:01.535 10:13:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:01.535 10:13:14 -- nvmf/common.sh@295 -- # e810=() 00:18:01.535 10:13:14 -- nvmf/common.sh@295 -- # local -ga e810 00:18:01.535 10:13:14 -- nvmf/common.sh@296 -- # x722=() 00:18:01.535 10:13:14 -- nvmf/common.sh@296 -- # local -ga x722 00:18:01.535 10:13:14 -- nvmf/common.sh@297 -- # mlx=() 00:18:01.535 10:13:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:01.535 10:13:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.535 10:13:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:01.535 10:13:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:01.535 10:13:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:01.535 10:13:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:01.535 10:13:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:01.535 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:01.535 10:13:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.535 10:13:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:01.536 10:13:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:01.536 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:01.536 10:13:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:01.536 10:13:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:01.536 10:13:14 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:01.536 10:13:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.536 10:13:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:01.536 10:13:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.536 10:13:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:01.536 Found net devices under 0000:86:00.0: cvl_0_0 00:18:01.536 10:13:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.536 10:13:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:01.536 10:13:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.536 10:13:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:01.536 10:13:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.536 10:13:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:01.536 Found net devices under 0000:86:00.1: cvl_0_1 00:18:01.536 10:13:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.536 10:13:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:01.536 10:13:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:01.536 10:13:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:01.536 10:13:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:01.536 10:13:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.536 10:13:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.536 10:13:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.536 10:13:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:01.536 10:13:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.536 10:13:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.536 10:13:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:01.536 10:13:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.536 10:13:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.536 10:13:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:01.536 10:13:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:01.536 10:13:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.536 10:13:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.795 10:13:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.795 10:13:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.795 10:13:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:01.795 10:13:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.795 10:13:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.795 10:13:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.795 10:13:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:01.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:01.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:18:01.795 00:18:01.795 --- 10.0.0.2 ping statistics --- 00:18:01.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.795 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:01.795 10:13:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:18:01.795 00:18:01.795 --- 10.0.0.1 ping statistics --- 00:18:01.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.795 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:18:01.795 10:13:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.795 10:13:15 -- nvmf/common.sh@410 -- # return 0 00:18:01.795 10:13:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:01.795 10:13:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.795 10:13:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:01.795 10:13:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:01.795 10:13:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.795 10:13:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:01.795 10:13:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:01.795 10:13:15 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:01.795 10:13:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:01.795 10:13:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:01.795 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:01.795 10:13:15 -- nvmf/common.sh@469 -- # nvmfpid=271397 00:18:01.795 10:13:15 -- nvmf/common.sh@470 -- # waitforlisten 271397 00:18:01.795 10:13:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.795 10:13:15 -- common/autotest_common.sh@819 -- # '[' -z 271397 ']' 00:18:01.795 10:13:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.795 10:13:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:01.795 10:13:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.795 10:13:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:01.795 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.054 [2024-04-24 10:13:15.083314] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:02.054 [2024-04-24 10:13:15.083353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.054 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.054 [2024-04-24 10:13:15.141484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.054 [2024-04-24 10:13:15.210653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:02.054 [2024-04-24 10:13:15.210765] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.054 [2024-04-24 10:13:15.210772] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
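The nvmf_tcp_init sequence above is the heart of the physical-NIC TCP setup: one port is moved into a private network namespace to act as the target, while the other stays in the default namespace as the initiator, and both directions are ping-verified before anything else runs. A minimal standalone sketch of the same topology, assuming root privileges and reusing the interface names and addresses seen in this log:

  TARGET_IF=cvl_0_0        # port that will serve NVMe/TCP from inside the namespace
  INITIATOR_IF=cvl_0_1     # port that stays in the default namespace as the host side
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                           # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                    # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target address
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                      # initiator -> target, as verified above
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

Both ping checks must succeed before the target is started; everything later in the log (nvmf_tgt, the RPC calls, the test binary) assumes this topology is in place.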
00:18:02.054 [2024-04-24 10:13:15.210778] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.054 [2024-04-24 10:13:15.210792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.618 10:13:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:02.618 10:13:15 -- common/autotest_common.sh@852 -- # return 0 00:18:02.618 10:13:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:02.618 10:13:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:02.618 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 10:13:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.877 10:13:15 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.877 10:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.877 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 [2024-04-24 10:13:15.912445] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.877 10:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.877 10:13:15 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:02.877 10:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.877 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 10:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.877 10:13:15 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.877 10:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.877 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 [2024-04-24 10:13:15.932600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.877 10:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.877 10:13:15 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:02.877 10:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.877 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 NULL1 00:18:02.877 10:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.877 10:13:15 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:02.877 10:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.877 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 10:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.877 10:13:15 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:02.877 10:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:02.877 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.877 10:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:02.877 10:13:15 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:02.877 [2024-04-24 10:13:15.986915] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
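The rpc_cmd calls traced above map onto SPDK's stock scripts/rpc.py client. A condensed sketch of the same bring-up, with $SPDK standing in for the checked-out tree (the log uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk) and a simplified stand-in for the harness's waitforlisten retry loop:

  SPDK=/path/to/spdk        # assumption: a built SPDK tree
  NS=cvl_0_0_ns_spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }

  # Start the target on core 1 (-m 0x2) inside the target namespace, then wait
  # for its RPC socket to appear.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -m 0x2 &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  rpc nvmf_create_transport -t tcp -o -u 8192   # -o: disable C2H success opt, -u: 8 KiB IO unit
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512           # 1000 MiB backing bdev, 512 B blocks
  rpc bdev_wait_for_examine
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Drive the subsystem with the fused-ordering exerciser, which prints the
  # fused_ordering(N) progress lines seen below.
  "$SPDK/test/nvme/fused_ordering/fused_ordering" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'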
00:18:02.877 [2024-04-24 10:13:15.986958] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271474 ] 00:18:02.877 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.445 Attached to nqn.2016-06.io.spdk:cnode1 00:18:03.445 Namespace ID: 1 size: 1GB 00:18:03.445 fused_ordering(0) 00:18:03.445 fused_ordering(1) 00:18:03.445 fused_ordering(2) 00:18:03.445 fused_ordering(3) 00:18:03.445 fused_ordering(4) 00:18:03.445 fused_ordering(5) 00:18:03.445 fused_ordering(6) 00:18:03.445 fused_ordering(7) 00:18:03.445 fused_ordering(8) 00:18:03.445 fused_ordering(9) 00:18:03.445 fused_ordering(10) 00:18:03.445 fused_ordering(11) 00:18:03.445 fused_ordering(12) 00:18:03.445 fused_ordering(13) 00:18:03.445 fused_ordering(14) 00:18:03.445 fused_ordering(15) 00:18:03.445 fused_ordering(16) 00:18:03.445 fused_ordering(17) 00:18:03.445 fused_ordering(18) 00:18:03.445 fused_ordering(19) 00:18:03.445 fused_ordering(20) 00:18:03.445 fused_ordering(21) 00:18:03.445 fused_ordering(22) 00:18:03.445 fused_ordering(23) 00:18:03.445 fused_ordering(24) 00:18:03.445 fused_ordering(25) 00:18:03.445 fused_ordering(26) 00:18:03.445 fused_ordering(27) 00:18:03.445 fused_ordering(28) 00:18:03.445 fused_ordering(29) 00:18:03.445 fused_ordering(30) 00:18:03.445 fused_ordering(31) 00:18:03.445 fused_ordering(32) 00:18:03.445 fused_ordering(33) 00:18:03.445 fused_ordering(34) 00:18:03.445 fused_ordering(35) 00:18:03.445 fused_ordering(36) 00:18:03.445 fused_ordering(37) 00:18:03.445 fused_ordering(38) 00:18:03.445 fused_ordering(39) 00:18:03.445 fused_ordering(40) 00:18:03.445 fused_ordering(41) 00:18:03.445 fused_ordering(42) 00:18:03.445 fused_ordering(43) 00:18:03.445 fused_ordering(44) 00:18:03.445 fused_ordering(45) 00:18:03.445 fused_ordering(46) 00:18:03.445 fused_ordering(47) 00:18:03.445 fused_ordering(48) 00:18:03.445 fused_ordering(49) 00:18:03.445 fused_ordering(50) 00:18:03.445 fused_ordering(51) 00:18:03.445 fused_ordering(52) 00:18:03.445 fused_ordering(53) 00:18:03.445 fused_ordering(54) 00:18:03.445 fused_ordering(55) 00:18:03.445 fused_ordering(56) 00:18:03.445 fused_ordering(57) 00:18:03.445 fused_ordering(58) 00:18:03.445 fused_ordering(59) 00:18:03.445 fused_ordering(60) 00:18:03.445 fused_ordering(61) 00:18:03.445 fused_ordering(62) 00:18:03.445 fused_ordering(63) 00:18:03.445 fused_ordering(64) 00:18:03.445 fused_ordering(65) 00:18:03.445 fused_ordering(66) 00:18:03.445 fused_ordering(67) 00:18:03.445 fused_ordering(68) 00:18:03.446 fused_ordering(69) 00:18:03.446 fused_ordering(70) 00:18:03.446 fused_ordering(71) 00:18:03.446 fused_ordering(72) 00:18:03.446 fused_ordering(73) 00:18:03.446 fused_ordering(74) 00:18:03.446 fused_ordering(75) 00:18:03.446 fused_ordering(76) 00:18:03.446 fused_ordering(77) 00:18:03.446 fused_ordering(78) 00:18:03.446 fused_ordering(79) 00:18:03.446 fused_ordering(80) 00:18:03.446 fused_ordering(81) 00:18:03.446 fused_ordering(82) 00:18:03.446 fused_ordering(83) 00:18:03.446 fused_ordering(84) 00:18:03.446 fused_ordering(85) 00:18:03.446 fused_ordering(86) 00:18:03.446 fused_ordering(87) 00:18:03.446 fused_ordering(88) 00:18:03.446 fused_ordering(89) 00:18:03.446 fused_ordering(90) 00:18:03.446 fused_ordering(91) 00:18:03.446 fused_ordering(92) 00:18:03.446 fused_ordering(93) 00:18:03.446 fused_ordering(94) 00:18:03.446 fused_ordering(95) 00:18:03.446 fused_ordering(96) 00:18:03.446 
fused_ordering(97) ... fused_ordering(956) (entries 97 through 956 condensed; 860 sequential lines, timestamps 00:18:03.446 through 00:18:05.105)
fused_ordering(957) 00:18:05.105 fused_ordering(958) 00:18:05.105 fused_ordering(959) 00:18:05.105 fused_ordering(960) 00:18:05.105 fused_ordering(961) 00:18:05.105 fused_ordering(962) 00:18:05.105 fused_ordering(963) 00:18:05.105 fused_ordering(964) 00:18:05.105 fused_ordering(965) 00:18:05.105 fused_ordering(966) 00:18:05.105 fused_ordering(967) 00:18:05.105 fused_ordering(968) 00:18:05.105 fused_ordering(969) 00:18:05.105 fused_ordering(970) 00:18:05.105 fused_ordering(971) 00:18:05.105 fused_ordering(972) 00:18:05.105 fused_ordering(973) 00:18:05.105 fused_ordering(974) 00:18:05.105 fused_ordering(975) 00:18:05.105 fused_ordering(976) 00:18:05.105 fused_ordering(977) 00:18:05.105 fused_ordering(978) 00:18:05.105 fused_ordering(979) 00:18:05.105 fused_ordering(980) 00:18:05.105 fused_ordering(981) 00:18:05.105 fused_ordering(982) 00:18:05.105 fused_ordering(983) 00:18:05.105 fused_ordering(984) 00:18:05.105 fused_ordering(985) 00:18:05.105 fused_ordering(986) 00:18:05.105 fused_ordering(987) 00:18:05.105 fused_ordering(988) 00:18:05.105 fused_ordering(989) 00:18:05.105 fused_ordering(990) 00:18:05.105 fused_ordering(991) 00:18:05.105 fused_ordering(992) 00:18:05.105 fused_ordering(993) 00:18:05.105 fused_ordering(994) 00:18:05.105 fused_ordering(995) 00:18:05.105 fused_ordering(996) 00:18:05.105 fused_ordering(997) 00:18:05.105 fused_ordering(998) 00:18:05.105 fused_ordering(999) 00:18:05.105 fused_ordering(1000) 00:18:05.105 fused_ordering(1001) 00:18:05.105 fused_ordering(1002) 00:18:05.105 fused_ordering(1003) 00:18:05.105 fused_ordering(1004) 00:18:05.105 fused_ordering(1005) 00:18:05.105 fused_ordering(1006) 00:18:05.105 fused_ordering(1007) 00:18:05.105 fused_ordering(1008) 00:18:05.105 fused_ordering(1009) 00:18:05.105 fused_ordering(1010) 00:18:05.105 fused_ordering(1011) 00:18:05.105 fused_ordering(1012) 00:18:05.105 fused_ordering(1013) 00:18:05.105 fused_ordering(1014) 00:18:05.105 fused_ordering(1015) 00:18:05.105 fused_ordering(1016) 00:18:05.105 fused_ordering(1017) 00:18:05.105 fused_ordering(1018) 00:18:05.105 fused_ordering(1019) 00:18:05.105 fused_ordering(1020) 00:18:05.105 fused_ordering(1021) 00:18:05.105 fused_ordering(1022) 00:18:05.105 fused_ordering(1023) 00:18:05.105 10:13:18 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:05.105 10:13:18 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:05.105 10:13:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:05.105 10:13:18 -- nvmf/common.sh@116 -- # sync 00:18:05.105 10:13:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:05.105 10:13:18 -- nvmf/common.sh@119 -- # set +e 00:18:05.105 10:13:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:05.105 10:13:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:05.105 rmmod nvme_tcp 00:18:05.105 rmmod nvme_fabrics 00:18:05.105 rmmod nvme_keyring 00:18:05.105 10:13:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:05.105 10:13:18 -- nvmf/common.sh@123 -- # set -e 00:18:05.105 10:13:18 -- nvmf/common.sh@124 -- # return 0 00:18:05.105 10:13:18 -- nvmf/common.sh@477 -- # '[' -n 271397 ']' 00:18:05.105 10:13:18 -- nvmf/common.sh@478 -- # killprocess 271397 00:18:05.105 10:13:18 -- common/autotest_common.sh@926 -- # '[' -z 271397 ']' 00:18:05.105 10:13:18 -- common/autotest_common.sh@930 -- # kill -0 271397 00:18:05.105 10:13:18 -- common/autotest_common.sh@931 -- # uname 00:18:05.105 10:13:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.106 10:13:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 271397 00:18:05.106 10:13:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:05.106 10:13:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:05.106 10:13:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 271397' 00:18:05.106 killing process with pid 271397 00:18:05.106 10:13:18 -- common/autotest_common.sh@945 -- # kill 271397 00:18:05.106 10:13:18 -- common/autotest_common.sh@950 -- # wait 271397 00:18:05.365 10:13:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:05.365 10:13:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:05.365 10:13:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:05.365 10:13:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.365 10:13:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:05.365 10:13:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.365 10:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.365 10:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.271 10:13:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:07.271 00:18:07.271 real 0m10.326s 00:18:07.271 user 0m5.535s 00:18:07.271 sys 0m5.247s 00:18:07.271 10:13:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.271 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:07.271 ************************************ 00:18:07.271 END TEST nvmf_fused_ordering 00:18:07.271 ************************************ 00:18:07.271 10:13:20 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:07.271 10:13:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:07.271 10:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:07.271 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:07.271 ************************************ 00:18:07.271 START TEST nvmf_delete_subsystem 00:18:07.271 ************************************ 00:18:07.271 10:13:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:07.530 * Looking for test storage... 
00:18:07.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.530 10:13:20 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.530 10:13:20 -- nvmf/common.sh@7 -- # uname -s 00:18:07.530 10:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.530 10:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.530 10:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.530 10:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.530 10:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.530 10:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.530 10:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.530 10:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.530 10:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.530 10:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.530 10:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.530 10:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.530 10:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.530 10:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.530 10:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.530 10:13:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.530 10:13:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.530 10:13:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.530 10:13:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.530 10:13:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.530 10:13:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.530 10:13:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.530 10:13:20 -- paths/export.sh@5 -- # export PATH 00:18:07.530 10:13:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.530 10:13:20 -- nvmf/common.sh@46 -- # : 0 00:18:07.530 10:13:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.530 10:13:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.530 10:13:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.530 10:13:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.530 10:13:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.530 10:13:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:07.530 10:13:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.530 10:13:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.530 10:13:20 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:18:07.530 10:13:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.530 10:13:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.530 10:13:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.530 10:13:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.530 10:13:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.530 10:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.530 10:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.530 10:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.530 10:13:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:07.530 10:13:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:07.530 10:13:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:07.530 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:12.802 10:13:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:12.802 10:13:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:12.802 10:13:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:12.802 10:13:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:12.802 10:13:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:12.802 10:13:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:12.802 10:13:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:12.802 10:13:25 -- nvmf/common.sh@294 -- # net_devs=() 00:18:12.802 10:13:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:12.802 10:13:25 -- nvmf/common.sh@295 -- # e810=() 00:18:12.802 10:13:25 -- nvmf/common.sh@295 -- # local -ga e810 00:18:12.802 10:13:25 -- nvmf/common.sh@296 -- # x722=() 
00:18:12.802 10:13:25 -- nvmf/common.sh@296 -- # local -ga x722 00:18:12.802 10:13:25 -- nvmf/common.sh@297 -- # mlx=() 00:18:12.802 10:13:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:12.802 10:13:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.802 10:13:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:12.802 10:13:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:12.802 10:13:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:12.802 10:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:12.802 10:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:12.802 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:12.802 10:13:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:12.802 10:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:12.802 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:12.802 10:13:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:12.802 10:13:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:12.802 10:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.802 10:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:12.802 10:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.802 10:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:12.802 Found net devices under 0000:86:00.0: cvl_0_0 00:18:12.802 10:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:12.802 10:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:12.802 10:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.802 10:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:12.802 10:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.802 10:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:12.802 Found net devices under 0000:86:00.1: cvl_0_1 00:18:12.802 10:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.802 10:13:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:12.802 10:13:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:12.802 10:13:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:12.802 10:13:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:12.802 10:13:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.802 10:13:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.802 10:13:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.802 10:13:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:12.802 10:13:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.802 10:13:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.802 10:13:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:12.802 10:13:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.802 10:13:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.802 10:13:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:12.802 10:13:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:12.802 10:13:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.802 10:13:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.802 10:13:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.802 10:13:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.802 10:13:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:12.802 10:13:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.803 10:13:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.803 10:13:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.803 10:13:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:12.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:18:12.803 00:18:12.803 --- 10.0.0.2 ping statistics --- 00:18:12.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.803 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:12.803 10:13:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:18:12.803 00:18:12.803 --- 10.0.0.1 ping statistics --- 00:18:12.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.803 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:12.803 10:13:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.803 10:13:25 -- nvmf/common.sh@410 -- # return 0 00:18:12.803 10:13:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:12.803 10:13:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.803 10:13:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:12.803 10:13:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:12.803 10:13:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.803 10:13:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:12.803 10:13:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:12.803 10:13:25 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:18:12.803 10:13:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:12.803 10:13:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:12.803 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:18:12.803 10:13:25 -- nvmf/common.sh@469 -- # nvmfpid=275248 00:18:12.803 10:13:25 -- nvmf/common.sh@470 -- # waitforlisten 275248 00:18:12.803 10:13:25 -- common/autotest_common.sh@819 -- # '[' -z 275248 ']' 00:18:12.803 10:13:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.803 10:13:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:12.803 10:13:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.803 10:13:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:12.803 10:13:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:12.803 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:18:12.803 [2024-04-24 10:13:25.514214] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:12.803 [2024-04-24 10:13:25.514255] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.803 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.803 [2024-04-24 10:13:25.570093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:12.803 [2024-04-24 10:13:25.647305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:12.803 [2024-04-24 10:13:25.647412] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.803 [2024-04-24 10:13:25.647421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.803 [2024-04-24 10:13:25.647429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
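
By this point the harness has a back-to-back TCP rig: cvl_0_0 moved into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 left in the root namespace as the initiator side (10.0.0.1), reachability verified with ping in both directions, and nvmf_tgt launched inside the namespace (its startup notices continue just below). The delete_subsystem test then builds a deliberately slow subsystem over RPC and deletes it while I/O is in flight. A condensed sketch of that whole flow, assembled from the commands visible in this trace; the socket-existence poll is a simplified stand-in for the harness's waitforlisten, and cleanup/error handling are omitted:

  #!/usr/bin/env bash
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  # Target port in its own namespace; initiator port stays in the root ns.
  ip netns add $NS
  ip link set cvl_0_0 netns $NS
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                    # root ns -> target ns
  ip netns exec $NS ping -c 1 10.0.0.1  # target ns -> root ns

  # Start the target inside the namespace; the RPC socket lives on the shared
  # filesystem, so rpc.py can talk to it from the root namespace.
  ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten

  rpc=$SPDK/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks
  # Wrap it in a delay bdev with one-second average/p99 latencies (values in us).
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Queue I/O against the slow namespace, then delete the subsystem under it.
  $SPDK/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # in-flight I/O now fails

  # Poll until perf notices and exits (mirrors the kill -0 / sleep 0.5 loop).
  while kill -0 $perf_pid 2>/dev/null; do sleep 0.5; done

Because Delay0 imposes one-second latencies on top of NULL1, perf's queue of 128 is guaranteed to be full of outstanding requests when nvmf_delete_subsystem lands, which is exactly what produces the wall of "completed with error (sct=0, sc=8)" completions further down.
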
00:18:12.803 [2024-04-24 10:13:25.647470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.803 [2024-04-24 10:13:25.647472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.062 10:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:13.062 10:13:26 -- common/autotest_common.sh@852 -- # return 0 00:18:13.062 10:13:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:13.062 10:13:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:13.062 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.062 10:13:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.062 10:13:26 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.062 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.062 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.322 [2024-04-24 10:13:26.348479] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.322 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.322 10:13:26 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:13.322 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.322 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.322 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.323 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.323 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 [2024-04-24 10:13:26.364608] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.323 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:13.323 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.323 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 NULL1 00:18:13.323 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:13.323 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.323 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 Delay0 00:18:13.323 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:13.323 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.323 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:18:13.323 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@28 -- # perf_pid=275492 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@30 -- # sleep 2 00:18:13.323 10:13:26 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:13.323 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.323 [2024-04-24 10:13:26.439217] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:15.227 10:13:28 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.227 10:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.227 10:13:28 -- common/autotest_common.sh@10 -- # set +x 00:18:15.487 Read completed with error (sct=0, sc=8) 00:18:15.487 Read completed with error (sct=0, sc=8) 00:18:15.487 Write completed with error (sct=0, sc=8) 00:18:15.487 starting I/O failed: -6
[several hundred further interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers elided: the subsystem was deleted while spdk_nvme_perf still had a full queue outstanding, so every in-flight request fails back to the initiator; only the distinct *ERROR* lines from the run are kept below]
00:18:15.487 [2024-04-24 10:13:28.600617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcd8f0 is same with the state(5) to be set 00:18:15.487 [2024-04-24 10:13:28.600968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ea8000c00 is same with the state(5) to be set 00:18:16.425 [2024-04-24 10:13:29.575682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4910 is same with the state(5) to be set 00:18:16.426 [2024-04-24 10:13:29.603258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd5e40 is same with the state(5) to be set 00:18:16.426 [2024-04-24 10:13:29.603422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcd640 is same with the state(5) to be set 00:18:16.426 [2024-04-24 10:13:29.603583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcdba0 is same with the state(5) to be set 00:18:16.426 [2024-04-24 10:13:29.603674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ea800c1d0 is same with the state(5) to be set 00:18:16.426 [2024-04-24 10:13:29.604409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4910 (9): Bad file descriptor 00:18:16.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:18:16.426 10:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.426 10:13:29 -- target/delete_subsystem.sh@34 -- # delay=0 00:18:16.426 10:13:29 -- target/delete_subsystem.sh@35 -- # kill -0 275492 00:18:16.426 10:13:29 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:16.426 Initializing NVMe Controllers 00:18:16.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.426 Controller IO queue size 128, less than required. 00:18:16.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:16.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:16.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:16.426 Initialization complete. Launching workers.
00:18:16.426 ======================================================== 00:18:16.426 Latency(us) 00:18:16.426 Device Information : IOPS MiB/s Average min max 00:18:16.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.58 0.10 943825.24 543.17 1012038.88 00:18:16.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.86 0.08 867561.36 229.38 1012230.44 00:18:16.426 ======================================================== 00:18:16.426 Total : 353.44 0.17 909763.56 229.38 1012230.44 00:18:16.426 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@35 -- # kill -0 275492 00:18:16.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (275492) - No such process 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@45 -- # NOT wait 275492 00:18:16.995 10:13:30 -- common/autotest_common.sh@640 -- # local es=0 00:18:16.995 10:13:30 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 275492 00:18:16.995 10:13:30 -- common/autotest_common.sh@628 -- # local arg=wait 00:18:16.995 10:13:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:16.995 10:13:30 -- common/autotest_common.sh@632 -- # type -t wait 00:18:16.995 10:13:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:16.995 10:13:30 -- common/autotest_common.sh@643 -- # wait 275492 00:18:16.995 10:13:30 -- common/autotest_common.sh@643 -- # es=1 00:18:16.995 10:13:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:16.995 10:13:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:16.995 10:13:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:16.995 10:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.995 10:13:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.995 10:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.995 10:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.995 10:13:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.995 [2024-04-24 10:13:30.127407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.995 10:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:16.995 10:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.995 10:13:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.995 10:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@54 -- # perf_pid=276015 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@56 -- # delay=0 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:16.995 10:13:30 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:16.995 EAL: No free 2048 kB hugepages reported on 
node 1 00:18:16.995 [2024-04-24 10:13:30.191650] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:17.563 10:13:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:17.563 10:13:30 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:17.563 10:13:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:18.132 10:13:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:18.132 10:13:31 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:18.132 10:13:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:18.390 10:13:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:18.390 10:13:31 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:18.390 10:13:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:18.958 10:13:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:18.958 10:13:32 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:18.958 10:13:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:19.543 10:13:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:19.543 10:13:32 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:19.543 10:13:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:20.111 10:13:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:20.111 10:13:33 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:20.111 10:13:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:20.371 Initializing NVMe Controllers 00:18:20.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.371 Controller IO queue size 128, less than required. 00:18:20.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:20.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:20.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:20.371 Initialization complete. Launching workers. 
00:18:20.371 ======================================================== 00:18:20.371 Latency(us) 00:18:20.371 Device Information : IOPS MiB/s Average min max 00:18:20.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003240.80 1000189.45 1041237.66 00:18:20.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005470.08 1000208.66 1041751.03 00:18:20.371 ======================================================== 00:18:20.371 Total : 256.00 0.12 1004355.44 1000189.45 1041751.03 00:18:20.371 00:18:20.631 10:13:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:20.631 10:13:33 -- target/delete_subsystem.sh@57 -- # kill -0 276015 00:18:20.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (276015) - No such process 00:18:20.631 10:13:33 -- target/delete_subsystem.sh@67 -- # wait 276015 00:18:20.631 10:13:33 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:20.631 10:13:33 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:18:20.631 10:13:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:20.631 10:13:33 -- nvmf/common.sh@116 -- # sync 00:18:20.631 10:13:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:20.631 10:13:33 -- nvmf/common.sh@119 -- # set +e 00:18:20.631 10:13:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:20.631 10:13:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:20.631 rmmod nvme_tcp 00:18:20.631 rmmod nvme_fabrics 00:18:20.631 rmmod nvme_keyring 00:18:20.631 10:13:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:20.631 10:13:33 -- nvmf/common.sh@123 -- # set -e 00:18:20.631 10:13:33 -- nvmf/common.sh@124 -- # return 0 00:18:20.631 10:13:33 -- nvmf/common.sh@477 -- # '[' -n 275248 ']' 00:18:20.631 10:13:33 -- nvmf/common.sh@478 -- # killprocess 275248 00:18:20.631 10:13:33 -- common/autotest_common.sh@926 -- # '[' -z 275248 ']' 00:18:20.631 10:13:33 -- common/autotest_common.sh@930 -- # kill -0 275248 00:18:20.631 10:13:33 -- common/autotest_common.sh@931 -- # uname 00:18:20.631 10:13:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.631 10:13:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 275248 00:18:20.631 10:13:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:20.631 10:13:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:20.631 10:13:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 275248' 00:18:20.631 killing process with pid 275248 00:18:20.631 10:13:33 -- common/autotest_common.sh@945 -- # kill 275248 00:18:20.631 10:13:33 -- common/autotest_common.sh@950 -- # wait 275248 00:18:20.891 10:13:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.891 10:13:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:20.891 10:13:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:20.891 10:13:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.891 10:13:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:20.891 10:13:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.891 10:13:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.891 10:13:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.798 10:13:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:22.798 00:18:22.798 real 0m15.555s 00:18:22.798 user 0m30.175s 00:18:22.798 sys 0m4.586s 00:18:22.798 10:13:36 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.798 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:22.798 ************************************ 00:18:22.798 END TEST nvmf_delete_subsystem 00:18:22.798 ************************************ 00:18:23.058 10:13:36 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:18:23.058 10:13:36 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:23.058 10:13:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:23.058 10:13:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:23.058 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:23.058 ************************************ 00:18:23.058 START TEST nvmf_nvme_cli 00:18:23.058 ************************************ 00:18:23.058 10:13:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:23.058 * Looking for test storage... 00:18:23.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.058 10:13:36 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.058 10:13:36 -- nvmf/common.sh@7 -- # uname -s 00:18:23.058 10:13:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.058 10:13:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.058 10:13:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.058 10:13:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.058 10:13:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.058 10:13:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.058 10:13:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.058 10:13:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.058 10:13:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.058 10:13:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.058 10:13:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.058 10:13:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.058 10:13:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.058 10:13:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.058 10:13:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.058 10:13:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.058 10:13:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.058 10:13:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.058 10:13:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.058 10:13:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same /opt/golangci:/opt/protoc:/opt/go toolchain prefix repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.058 10:13:36 -- paths/export.sh@3 -- # PATH=[as above, with /opt/go/1.21.1/bin prepended; full value elided] 00:18:23.058 10:13:36 -- paths/export.sh@4 -- # PATH=[as above, with /opt/protoc/21.7/bin prepended; full value elided] 00:18:23.058 10:13:36 -- paths/export.sh@5 -- # export PATH 00:18:23.058 10:13:36 -- paths/export.sh@6 -- # echo [final exported PATH value elided] 00:18:23.058 10:13:36 -- nvmf/common.sh@46 -- # : 0 00:18:23.058 10:13:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:23.058 10:13:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:23.058 10:13:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:23.058 10:13:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.058 10:13:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.058 10:13:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:23.058 10:13:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:23.058 10:13:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:23.058 10:13:36 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:23.058 10:13:36 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:23.058 10:13:36 -- target/nvme_cli.sh@14 -- # devs=() 00:18:23.058 10:13:36 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:23.058 10:13:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:23.058 10:13:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.058 10:13:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:23.058 10:13:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:23.058 10:13:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:23.058 10:13:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.058 10:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.058 10:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.058 10:13:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:23.058 10:13:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:23.058 10:13:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:23.059 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:28.348 10:13:40 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:28.348 10:13:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:28.348 10:13:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:28.348 10:13:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:28.348 10:13:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:28.348 10:13:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:28.348 10:13:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:28.348 10:13:40 -- nvmf/common.sh@294 -- # net_devs=() 00:18:28.348 10:13:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:28.348 10:13:40 -- nvmf/common.sh@295 -- # e810=() 00:18:28.348 10:13:40 -- nvmf/common.sh@295 -- # local -ga e810 00:18:28.348 10:13:40 -- nvmf/common.sh@296 -- # x722=() 00:18:28.348 10:13:40 -- nvmf/common.sh@296 -- # local -ga x722 00:18:28.348 10:13:40 -- nvmf/common.sh@297 -- # mlx=() 00:18:28.348 10:13:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:28.348 10:13:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.348 10:13:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:28.348 10:13:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:28.348 10:13:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:28.348 10:13:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:28.348 10:13:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:28.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:28.348 10:13:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:28.348 10:13:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:28.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:28.348 10:13:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:18:28.348 10:13:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:28.348 10:13:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:28.348 10:13:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.348 10:13:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:28.348 10:13:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.348 10:13:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:28.348 Found net devices under 0000:86:00.0: cvl_0_0 00:18:28.348 10:13:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.348 10:13:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:28.348 10:13:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.348 10:13:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:28.348 10:13:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.348 10:13:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:28.348 Found net devices under 0000:86:00.1: cvl_0_1 00:18:28.348 10:13:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.348 10:13:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:28.348 10:13:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:28.348 10:13:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:28.348 10:13:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:28.348 10:13:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.348 10:13:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.348 10:13:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.348 10:13:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:28.348 10:13:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.348 10:13:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.348 10:13:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:28.348 10:13:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.348 10:13:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.348 10:13:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:28.348 10:13:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:28.348 10:13:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.348 10:13:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.348 10:13:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.348 10:13:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.348 10:13:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:28.348 10:13:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.348 10:13:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.348 10:13:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.348 10:13:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:28.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:18:28.348 00:18:28.348 --- 10.0.0.2 ping statistics --- 00:18:28.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.348 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:18:28.348 10:13:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:18:28.348 00:18:28.348 --- 10.0.0.1 ping statistics --- 00:18:28.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.348 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:18:28.348 10:13:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.348 10:13:41 -- nvmf/common.sh@410 -- # return 0 00:18:28.348 10:13:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:28.348 10:13:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.348 10:13:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:28.348 10:13:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:28.348 10:13:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.348 10:13:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:28.348 10:13:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:28.348 10:13:41 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:28.348 10:13:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:28.348 10:13:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:28.348 10:13:41 -- common/autotest_common.sh@10 -- # set +x 00:18:28.348 10:13:41 -- nvmf/common.sh@469 -- # nvmfpid=280000 00:18:28.348 10:13:41 -- nvmf/common.sh@470 -- # waitforlisten 280000 00:18:28.349 10:13:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.349 10:13:41 -- common/autotest_common.sh@819 -- # '[' -z 280000 ']' 00:18:28.349 10:13:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.349 10:13:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:28.349 10:13:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.349 10:13:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:28.349 10:13:41 -- common/autotest_common.sh@10 -- # set +x 00:18:28.349 [2024-04-24 10:13:41.217293] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:28.349 [2024-04-24 10:13:41.217343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.349 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.349 [2024-04-24 10:13:41.276008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.349 [2024-04-24 10:13:41.361086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:28.349 [2024-04-24 10:13:41.361194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.349 [2024-04-24 10:13:41.361203] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:28.349 [2024-04-24 10:13:41.361210] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.349 [2024-04-24 10:13:41.361250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.349 [2024-04-24 10:13:41.361344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.349 [2024-04-24 10:13:41.361430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.349 [2024-04-24 10:13:41.361431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.916 10:13:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.916 10:13:42 -- common/autotest_common.sh@852 -- # return 0 00:18:28.916 10:13:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.916 10:13:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 10:13:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.916 10:13:42 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 [2024-04-24 10:13:42.069379] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 Malloc0 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 Malloc1 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 [2024-04-24 10:13:42.151058] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.916 10:13:42 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:28.916 10:13:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.916 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.916 10:13:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.917 10:13:42 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:29.175 00:18:29.175 Discovery Log Number of Records 2, Generation counter 2 00:18:29.175 =====Discovery Log Entry 0====== 00:18:29.175 trtype: tcp 00:18:29.175 adrfam: ipv4 00:18:29.175 subtype: current discovery subsystem 00:18:29.175 treq: not required 00:18:29.175 portid: 0 00:18:29.175 trsvcid: 4420 00:18:29.175 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:29.175 traddr: 10.0.0.2 00:18:29.175 eflags: explicit discovery connections, duplicate discovery information 00:18:29.175 sectype: none 00:18:29.175 =====Discovery Log Entry 1====== 00:18:29.175 trtype: tcp 00:18:29.175 adrfam: ipv4 00:18:29.175 subtype: nvme subsystem 00:18:29.175 treq: not required 00:18:29.175 portid: 0 00:18:29.175 trsvcid: 4420 00:18:29.175 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:29.175 traddr: 10.0.0.2 00:18:29.175 eflags: none 00:18:29.175 sectype: none 00:18:29.175 10:13:42 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:29.175 10:13:42 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:29.175 10:13:42 -- nvmf/common.sh@510 -- # local dev _ 00:18:29.175 10:13:42 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:29.175 10:13:42 -- nvmf/common.sh@509 -- # nvme list 00:18:29.175 10:13:42 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:18:29.175 10:13:42 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:29.175 10:13:42 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:18:29.175 10:13:42 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:29.175 10:13:42 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:29.175 10:13:42 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:30.109 10:13:43 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:30.109 10:13:43 -- common/autotest_common.sh@1177 -- # local i=0 00:18:30.109 10:13:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.109 10:13:43 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:18:30.109 10:13:43 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:18:30.109 10:13:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:32.640 10:13:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:32.640 10:13:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:32.640 10:13:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.640 10:13:45 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:18:32.640 10:13:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.640 10:13:45 -- common/autotest_common.sh@1187 -- # return 0 00:18:32.640 10:13:45 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:32.640 10:13:45 -- 
nvmf/common.sh@510 -- # local dev _ 00:18:32.640 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.640 10:13:45 -- nvmf/common.sh@509 -- # nvme list 00:18:32.640 10:13:45 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:18:32.640 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.640 10:13:45 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:18:32.640 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.640 10:13:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:32.640 10:13:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:18:32.640 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.640 10:13:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:32.640 10:13:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:18:32.640 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.640 10:13:45 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:32.640 /dev/nvme0n1 ]] 00:18:32.640 10:13:45 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:32.640 10:13:45 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:32.640 10:13:45 -- nvmf/common.sh@510 -- # local dev _ 00:18:32.640 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.641 10:13:45 -- nvmf/common.sh@509 -- # nvme list 00:18:32.641 10:13:45 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:18:32.641 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.641 10:13:45 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:18:32.641 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.641 10:13:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:32.641 10:13:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:18:32.641 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.641 10:13:45 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:32.641 10:13:45 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:18:32.641 10:13:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:18:32.641 10:13:45 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:32.641 10:13:45 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:32.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.641 10:13:45 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:32.641 10:13:45 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.641 10:13:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.641 10:13:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.641 10:13:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.641 10:13:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.641 10:13:45 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.641 10:13:45 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:32.641 10:13:45 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.641 10:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.641 10:13:45 -- common/autotest_common.sh@10 -- # set +x 00:18:32.641 10:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.641 10:13:45 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:32.641 10:13:45 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:32.641 10:13:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:32.641 10:13:45 -- nvmf/common.sh@116 -- # sync 00:18:32.641 10:13:45 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:32.641 10:13:45 -- nvmf/common.sh@119 -- # set +e 00:18:32.641 10:13:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:32.641 10:13:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:32.641 rmmod nvme_tcp 00:18:32.641 rmmod nvme_fabrics 00:18:32.641 rmmod nvme_keyring 00:18:32.641 10:13:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:32.641 10:13:45 -- nvmf/common.sh@123 -- # set -e 00:18:32.641 10:13:45 -- nvmf/common.sh@124 -- # return 0 00:18:32.641 10:13:45 -- nvmf/common.sh@477 -- # '[' -n 280000 ']' 00:18:32.641 10:13:45 -- nvmf/common.sh@478 -- # killprocess 280000 00:18:32.641 10:13:45 -- common/autotest_common.sh@926 -- # '[' -z 280000 ']' 00:18:32.641 10:13:45 -- common/autotest_common.sh@930 -- # kill -0 280000 00:18:32.641 10:13:45 -- common/autotest_common.sh@931 -- # uname 00:18:32.641 10:13:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:32.641 10:13:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 280000 00:18:32.641 10:13:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:32.641 10:13:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:32.641 10:13:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 280000' 00:18:32.641 killing process with pid 280000 00:18:32.641 10:13:45 -- common/autotest_common.sh@945 -- # kill 280000 00:18:32.641 10:13:45 -- common/autotest_common.sh@950 -- # wait 280000 00:18:32.900 10:13:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:32.900 10:13:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:32.900 10:13:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:32.900 10:13:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.900 10:13:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:32.900 10:13:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.900 10:13:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.900 10:13:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.808 10:13:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:34.808 00:18:34.808 real 0m11.893s 00:18:34.808 user 0m19.436s 00:18:34.808 sys 0m4.215s 00:18:34.808 10:13:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.808 10:13:47 -- common/autotest_common.sh@10 -- # set +x 00:18:34.808 ************************************ 00:18:34.808 END TEST nvmf_nvme_cli 00:18:34.808 ************************************ 00:18:34.808 10:13:48 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:18:34.808 10:13:48 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:34.808 10:13:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:34.808 10:13:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:34.808 10:13:48 -- common/autotest_common.sh@10 -- # set +x 00:18:34.808 ************************************ 00:18:34.808 START TEST nvmf_host_management 00:18:34.808 ************************************ 00:18:34.808 10:13:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:35.067 * Looking for test storage... 
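Initiator-side, the nvme_cli test that just wrapped up above condenses to a handful of nvme-cli calls; a rough sketch (transport address, subsystem NQN, and serial exactly as in the trace, with the --hostnqn/--hostid flags dropped here for brevity):

    nvme discover -t tcp -a 10.0.0.2 -s 4420                  # expect the two discovery log entries shown above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # 2, one per Malloc namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1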
00:18:35.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.067 10:13:48 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.067 10:13:48 -- nvmf/common.sh@7 -- # uname -s 00:18:35.067 10:13:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.067 10:13:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.067 10:13:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.067 10:13:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.067 10:13:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.067 10:13:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.067 10:13:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.067 10:13:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.067 10:13:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.067 10:13:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.067 10:13:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.067 10:13:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.067 10:13:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.067 10:13:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.067 10:13:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.067 10:13:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.067 10:13:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.067 10:13:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.067 10:13:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.067 10:13:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 10:13:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 10:13:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 10:13:48 -- paths/export.sh@5 -- # export PATH 00:18:35.067 10:13:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 10:13:48 -- nvmf/common.sh@46 -- # : 0 00:18:35.067 10:13:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:35.067 10:13:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:35.067 10:13:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:35.067 10:13:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.067 10:13:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.067 10:13:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:35.067 10:13:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:35.067 10:13:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:35.067 10:13:48 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.067 10:13:48 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.067 10:13:48 -- target/host_management.sh@104 -- # nvmftestinit 00:18:35.067 10:13:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:35.067 10:13:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.067 10:13:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:35.067 10:13:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:35.067 10:13:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:35.067 10:13:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.067 10:13:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.067 10:13:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.067 10:13:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:35.067 10:13:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:35.067 10:13:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:35.067 10:13:48 -- common/autotest_common.sh@10 -- # set +x 00:18:40.350 10:13:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:40.350 10:13:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:40.350 10:13:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:40.350 10:13:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:40.350 10:13:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:40.350 10:13:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:40.350 10:13:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:40.350 10:13:53 -- nvmf/common.sh@294 -- # net_devs=() 00:18:40.350 10:13:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:40.350 
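The NIC scan that follows maps PCI functions to kernel interface names with nothing more than a sysfs glob; a stand-alone equivalent of the pci_net_devs expansion (PCI address taken from the "Found 0000:86:00.0" lines below) would be:

    # net interfaces backed by one PCI function
    pci=0000:86:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 on this node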
10:13:53 -- nvmf/common.sh@295 -- # e810=() 00:18:40.350 10:13:53 -- nvmf/common.sh@295 -- # local -ga e810 00:18:40.350 10:13:53 -- nvmf/common.sh@296 -- # x722=() 00:18:40.350 10:13:53 -- nvmf/common.sh@296 -- # local -ga x722 00:18:40.350 10:13:53 -- nvmf/common.sh@297 -- # mlx=() 00:18:40.350 10:13:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:40.350 10:13:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.350 10:13:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:40.350 10:13:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:40.350 10:13:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:40.350 10:13:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:40.350 10:13:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:40.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:40.350 10:13:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:40.350 10:13:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:40.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:40.350 10:13:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:40.350 10:13:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:40.350 10:13:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:40.350 10:13:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.350 10:13:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:40.350 10:13:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.350 10:13:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:18:40.350 Found net devices under 0000:86:00.0: cvl_0_0 00:18:40.350 10:13:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.350 10:13:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:40.350 10:13:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.350 10:13:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:40.351 10:13:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.351 10:13:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:40.351 Found net devices under 0000:86:00.1: cvl_0_1 00:18:40.351 10:13:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.351 10:13:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:40.351 10:13:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:40.351 10:13:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:40.351 10:13:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:40.351 10:13:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:40.351 10:13:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.351 10:13:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.351 10:13:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.351 10:13:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:40.351 10:13:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.351 10:13:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.351 10:13:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:40.351 10:13:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.351 10:13:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.351 10:13:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:40.351 10:13:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:40.351 10:13:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.351 10:13:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.351 10:13:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.351 10:13:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.351 10:13:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:40.351 10:13:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.351 10:13:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.351 10:13:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.351 10:13:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:40.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:18:40.351 00:18:40.351 --- 10.0.0.2 ping statistics --- 00:18:40.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.351 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:40.351 10:13:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:18:40.351 00:18:40.351 --- 10.0.0.1 ping statistics --- 00:18:40.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.351 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:18:40.351 10:13:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.351 10:13:53 -- nvmf/common.sh@410 -- # return 0 00:18:40.351 10:13:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:40.351 10:13:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.351 10:13:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:40.351 10:13:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:40.351 10:13:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.351 10:13:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:40.351 10:13:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:40.351 10:13:53 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:18:40.351 10:13:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:40.351 10:13:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.351 10:13:53 -- common/autotest_common.sh@10 -- # set +x 00:18:40.351 ************************************ 00:18:40.351 START TEST nvmf_host_management 00:18:40.351 ************************************ 00:18:40.351 10:13:53 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:18:40.351 10:13:53 -- target/host_management.sh@69 -- # starttarget 00:18:40.351 10:13:53 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:40.351 10:13:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:40.351 10:13:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:40.351 10:13:53 -- common/autotest_common.sh@10 -- # set +x 00:18:40.351 10:13:53 -- nvmf/common.sh@469 -- # nvmfpid=284275 00:18:40.351 10:13:53 -- nvmf/common.sh@470 -- # waitforlisten 284275 00:18:40.351 10:13:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:40.351 10:13:53 -- common/autotest_common.sh@819 -- # '[' -z 284275 ']' 00:18:40.351 10:13:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.351 10:13:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:40.351 10:13:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.351 10:13:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:40.351 10:13:53 -- common/autotest_common.sh@10 -- # set +x 00:18:40.351 [2024-04-24 10:13:53.332414] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
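For readability, the nvmf_tcp_init wiring the trace above just finished amounts to the following sequence (interface names, addresses, and the iptables rule copied from the trace):

    ip netns add cvl_0_0_ns_spdk                      # target NIC gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # sanity check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1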
00:18:40.351 [2024-04-24 10:13:53.332457] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.351 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.351 [2024-04-24 10:13:53.389422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.351 [2024-04-24 10:13:53.467428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:40.351 [2024-04-24 10:13:53.467533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.351 [2024-04-24 10:13:53.467543] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.351 [2024-04-24 10:13:53.467550] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.351 [2024-04-24 10:13:53.467644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.351 [2024-04-24 10:13:53.467730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.351 [2024-04-24 10:13:53.467835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.351 [2024-04-24 10:13:53.467837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:40.917 10:13:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:40.917 10:13:54 -- common/autotest_common.sh@852 -- # return 0 00:18:40.917 10:13:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:40.917 10:13:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:40.917 10:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:40.917 10:13:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.917 10:13:54 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:40.917 10:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.917 10:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 [2024-04-24 10:13:54.185417] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.918 10:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.918 10:13:54 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:40.918 10:13:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:40.918 10:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 10:13:54 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:41.176 10:13:54 -- target/host_management.sh@23 -- # cat 00:18:41.176 10:13:54 -- target/host_management.sh@30 -- # rpc_cmd 00:18:41.176 10:13:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.176 10:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 Malloc0 00:18:41.176 [2024-04-24 10:13:54.245192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.176 10:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.176 10:13:54 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:41.176 10:13:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:41.176 10:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 10:13:54 -- target/host_management.sh@73 -- # perfpid=284545 00:18:41.176 10:13:54 -- target/host_management.sh@74 -- # 
waitforlisten 284545 /var/tmp/bdevperf.sock 00:18:41.176 10:13:54 -- common/autotest_common.sh@819 -- # '[' -z 284545 ']' 00:18:41.176 10:13:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.176 10:13:54 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:41.176 10:13:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:41.176 10:13:54 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:41.176 10:13:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.176 10:13:54 -- nvmf/common.sh@520 -- # config=() 00:18:41.176 10:13:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:41.176 10:13:54 -- nvmf/common.sh@520 -- # local subsystem config 00:18:41.176 10:13:54 -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 10:13:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:41.176 10:13:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:41.176 { 00:18:41.176 "params": { 00:18:41.176 "name": "Nvme$subsystem", 00:18:41.176 "trtype": "$TEST_TRANSPORT", 00:18:41.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.176 "adrfam": "ipv4", 00:18:41.177 "trsvcid": "$NVMF_PORT", 00:18:41.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.177 "hdgst": ${hdgst:-false}, 00:18:41.177 "ddgst": ${ddgst:-false} 00:18:41.177 }, 00:18:41.177 "method": "bdev_nvme_attach_controller" 00:18:41.177 } 00:18:41.177 EOF 00:18:41.177 )") 00:18:41.177 10:13:54 -- nvmf/common.sh@542 -- # cat 00:18:41.177 10:13:54 -- nvmf/common.sh@544 -- # jq . 00:18:41.177 10:13:54 -- nvmf/common.sh@545 -- # IFS=, 00:18:41.177 10:13:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:41.177 "params": { 00:18:41.177 "name": "Nvme0", 00:18:41.177 "trtype": "tcp", 00:18:41.177 "traddr": "10.0.0.2", 00:18:41.177 "adrfam": "ipv4", 00:18:41.177 "trsvcid": "4420", 00:18:41.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:41.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:41.177 "hdgst": false, 00:18:41.177 "ddgst": false 00:18:41.177 }, 00:18:41.177 "method": "bdev_nvme_attach_controller" 00:18:41.177 }' 00:18:41.177 [2024-04-24 10:13:54.333337] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:41.177 [2024-04-24 10:13:54.333383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284545 ] 00:18:41.177 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.177 [2024-04-24 10:13:54.389367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.435 [2024-04-24 10:13:54.468346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.693 Running I/O for 10 seconds... 
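The bdevperf launch above feeds its bdev configuration through a process substitution (--json /dev/fd/63) assembled by gen_nvmf_target_json. Materialized into a file, the run is roughly equivalent to the sketch below; the attach-controller params object is the one printed in the trace, while the surrounding "subsystems"/"bdev" wrapper is an assumption about what the helper emits, and nvme0.json is an arbitrary file name for this sketch:

    # nvme0.json (wrapper layout assumed; params object as printed in the trace):
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        } ]
      } ]
    }

    # then run the same workload against it:
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvme0.json \
        -q 64 -o 65536 -w verify -t 10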
00:18:41.973 10:13:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:41.973 10:13:55 -- common/autotest_common.sh@852 -- # return 0 00:18:41.973 10:13:55 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:41.973 10:13:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.973 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.973 10:13:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.973 10:13:55 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.973 10:13:55 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:41.973 10:13:55 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:41.973 10:13:55 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:41.973 10:13:55 -- target/host_management.sh@52 -- # local ret=1 00:18:41.973 10:13:55 -- target/host_management.sh@53 -- # local i 00:18:41.973 10:13:55 -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:41.973 10:13:55 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:41.973 10:13:55 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:41.973 10:13:55 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:41.973 10:13:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.973 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.973 10:13:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.973 10:13:55 -- target/host_management.sh@55 -- # read_io_count=1239 00:18:41.973 10:13:55 -- target/host_management.sh@58 -- # '[' 1239 -ge 100 ']' 00:18:41.973 10:13:55 -- target/host_management.sh@59 -- # ret=0 00:18:41.973 10:13:55 -- target/host_management.sh@60 -- # break 00:18:41.973 10:13:55 -- target/host_management.sh@64 -- # return 0 00:18:41.973 10:13:55 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:41.973 10:13:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.973 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.973 [2024-04-24 10:13:55.212531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set 00:18:41.973 [2024-04-24 10:13:55.212615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to 
be set
00:18:41.973 [... tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5d6f0 is same with the state(5) to be set -- same message repeated through 10:13:55.212868 ...]
00:18:41.974 [2024-04-24 10:13:55.213105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.974 [2024-04-24 10:13:55.213138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:41.974 [... analogous print_command/print_completion pairs for each remaining in-flight I/O, all completing as ABORTED - SQ DELETION (00/08), through 10:13:55.214096 ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.976 [2024-04-24 10:13:55.214104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.976 [2024-04-24 10:13:55.214112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.976 [2024-04-24 10:13:55.214120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.976 [2024-04-24 10:13:55.214127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.976 [2024-04-24 10:13:55.214135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138d170 is same with the state(5) to be set 00:18:41.976 [2024-04-24 10:13:55.214187] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x138d170 was disconnected and freed. reset controller. 00:18:41.976 [2024-04-24 10:13:55.215098] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:41.976 task offset: 52480 on job bdev=Nvme0n1 fails 00:18:41.976 00:18:41.976 Latency(us) 00:18:41.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.976 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:41.976 Job: Nvme0n1 ended in about 0.43 seconds with error 00:18:41.976 Verification LBA range: start 0x0 length 0x400 00:18:41.976 Nvme0n1 : 0.43 3275.27 204.70 147.83 0.00 18414.10 1688.26 29063.79 00:18:41.976 =================================================================================================================== 00:18:41.976 Total : 3275.27 204.70 147.83 0.00 18414.10 1688.26 29063.79 00:18:41.976 [2024-04-24 10:13:55.216684] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:41.976 [2024-04-24 10:13:55.216698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138f900 (9): Bad file descriptor 00:18:41.976 10:13:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.976 10:13:55 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:41.976 [2024-04-24 10:13:55.218101] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:18:41.976 10:13:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:41.976 [2024-04-24 10:13:55.218193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:41.976 [2024-04-24 10:13:55.218216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.976 [2024-04-24 10:13:55.218232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:18:41.976 [2024-04-24 10:13:55.218240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:18:41.976 [2024-04-24 10:13:55.218247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:18:41.976 [2024-04-24 10:13:55.218254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x138f900 00:18:41.976 [2024-04-24 10:13:55.218274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138f900 (9): Bad file descriptor 00:18:41.976 [2024-04-24 10:13:55.218285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:41.976 [2024-04-24 10:13:55.218292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:41.976 [2024-04-24 10:13:55.218299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:41.976 [2024-04-24 10:13:55.218311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:41.976 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:18:41.976 10:13:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:41.976 10:13:55 -- target/host_management.sh@87 -- # sleep 1 00:18:43.061 10:13:56 -- target/host_management.sh@91 -- # kill -9 284545 00:18:43.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (284545) - No such process 00:18:43.061 10:13:56 -- target/host_management.sh@91 -- # true 00:18:43.061 10:13:56 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:43.061 10:13:56 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:43.061 10:13:56 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:43.061 10:13:56 -- nvmf/common.sh@520 -- # config=() 00:18:43.061 10:13:56 -- nvmf/common.sh@520 -- # local subsystem config 00:18:43.061 10:13:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:43.061 10:13:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:43.061 { 00:18:43.061 "params": { 00:18:43.061 "name": "Nvme$subsystem", 00:18:43.061 "trtype": "$TEST_TRANSPORT", 00:18:43.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.061 "adrfam": "ipv4", 00:18:43.061 "trsvcid": "$NVMF_PORT", 00:18:43.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.061 "hdgst": ${hdgst:-false}, 00:18:43.061 "ddgst": ${ddgst:-false} 00:18:43.061 }, 00:18:43.061 "method": "bdev_nvme_attach_controller" 00:18:43.061 } 00:18:43.061 EOF 00:18:43.061 )") 00:18:43.061 10:13:56 -- nvmf/common.sh@542 -- # cat 00:18:43.061 10:13:56 -- nvmf/common.sh@544 -- # jq . 00:18:43.061 10:13:56 -- nvmf/common.sh@545 -- # IFS=, 00:18:43.061 10:13:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:43.061 "params": { 00:18:43.061 "name": "Nvme0", 00:18:43.061 "trtype": "tcp", 00:18:43.061 "traddr": "10.0.0.2", 00:18:43.061 "adrfam": "ipv4", 00:18:43.061 "trsvcid": "4420", 00:18:43.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:43.061 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:43.061 "hdgst": false, 00:18:43.061 "ddgst": false 00:18:43.061 }, 00:18:43.061 "method": "bdev_nvme_attach_controller" 00:18:43.061 }' 00:18:43.061 [2024-04-24 10:13:56.279237] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
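
For reference, the bdevperf restart above is driven entirely by the JSON that gen_nvmf_target_json prints into /dev/fd/62. A minimal standalone sketch of the same pattern follows; the nvme_json helper name, the relative paths, and the top-level "subsystems" wrapper around the fragment shown in the trace are illustrative assumptions, while the connection parameters are the ones from this run:

    #!/usr/bin/env bash
    # Emit a one-controller bdev config equivalent to the fragment printed
    # by gen_nvmf_target_json above, then hand it to bdevperf via process
    # substitution (the log's --json /dev/fd/62 is the same mechanism).
    nvme_json() {
        printf '%s\n' '{
          "subsystems": [{
            "subsystem": "bdev",
            "config": [{
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }]
          }]
        }'
    }
    ./build/examples/bdevperf --json <(nvme_json) -q 64 -o 65536 -w verify -t 1
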
00:18:43.061 [2024-04-24 10:13:56.279284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284805 ] 00:18:43.061 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.061 [2024-04-24 10:13:56.333397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.319 [2024-04-24 10:13:56.402896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.319 Running I/O for 1 seconds... 00:18:44.694 00:18:44.694 Latency(us) 00:18:44.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.694 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:44.694 Verification LBA range: start 0x0 length 0x400 00:18:44.694 Nvme0n1 : 1.01 3011.63 188.23 0.00 0.00 20986.94 2692.67 30317.52 00:18:44.694 =================================================================================================================== 00:18:44.694 Total : 3011.63 188.23 0.00 0.00 20986.94 2692.67 30317.52 00:18:44.694 10:13:57 -- target/host_management.sh@101 -- # stoptarget 00:18:44.694 10:13:57 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:44.694 10:13:57 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:44.694 10:13:57 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:44.694 10:13:57 -- target/host_management.sh@40 -- # nvmftestfini 00:18:44.694 10:13:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:44.694 10:13:57 -- nvmf/common.sh@116 -- # sync 00:18:44.694 10:13:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:44.694 10:13:57 -- nvmf/common.sh@119 -- # set +e 00:18:44.694 10:13:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:44.694 10:13:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:44.694 rmmod nvme_tcp 00:18:44.694 rmmod nvme_fabrics 00:18:44.694 rmmod nvme_keyring 00:18:44.694 10:13:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:44.694 10:13:57 -- nvmf/common.sh@123 -- # set -e 00:18:44.694 10:13:57 -- nvmf/common.sh@124 -- # return 0 00:18:44.694 10:13:57 -- nvmf/common.sh@477 -- # '[' -n 284275 ']' 00:18:44.694 10:13:57 -- nvmf/common.sh@478 -- # killprocess 284275 00:18:44.694 10:13:57 -- common/autotest_common.sh@926 -- # '[' -z 284275 ']' 00:18:44.694 10:13:57 -- common/autotest_common.sh@930 -- # kill -0 284275 00:18:44.694 10:13:57 -- common/autotest_common.sh@931 -- # uname 00:18:44.694 10:13:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.694 10:13:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 284275 00:18:44.694 10:13:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:44.694 10:13:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:44.694 10:13:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 284275' 00:18:44.694 killing process with pid 284275 00:18:44.694 10:13:57 -- common/autotest_common.sh@945 -- # kill 284275 00:18:44.694 10:13:57 -- common/autotest_common.sh@950 -- # wait 284275 00:18:44.953 [2024-04-24 10:13:58.145920] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:44.953 10:13:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:44.953 10:13:58 -- nvmf/common.sh@483 -- # [[ 
tcp == \t\c\p ]] 00:18:44.953 10:13:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:44.953 10:13:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.953 10:13:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:44.953 10:13:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.953 10:13:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.953 10:13:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.486 10:14:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:47.486 00:18:47.486 real 0m6.952s 00:18:47.486 user 0m21.105s 00:18:47.486 sys 0m1.188s 00:18:47.486 10:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.486 10:14:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.486 ************************************ 00:18:47.486 END TEST nvmf_host_management 00:18:47.486 ************************************ 00:18:47.486 10:14:00 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:47.486 00:18:47.486 real 0m12.251s 00:18:47.486 user 0m22.536s 00:18:47.486 sys 0m5.041s 00:18:47.486 10:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.486 10:14:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.486 ************************************ 00:18:47.486 END TEST nvmf_host_management 00:18:47.486 ************************************ 00:18:47.486 10:14:00 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:47.486 10:14:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:47.486 10:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.486 10:14:00 -- common/autotest_common.sh@10 -- # set +x 00:18:47.486 ************************************ 00:18:47.486 START TEST nvmf_lvol 00:18:47.486 ************************************ 00:18:47.486 10:14:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:47.486 * Looking for test storage... 
00:18:47.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.486 10:14:00 -- nvmf/common.sh@7 -- # uname -s 00:18:47.486 10:14:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.486 10:14:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.486 10:14:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.486 10:14:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.486 10:14:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.486 10:14:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.486 10:14:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.486 10:14:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.486 10:14:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.486 10:14:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.486 10:14:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.486 10:14:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.486 10:14:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.486 10:14:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.486 10:14:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.486 10:14:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.486 10:14:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.486 10:14:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.486 10:14:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.486 10:14:00 -- paths/export.sh@2 -- # PATH=[... full value elided: the same golangci/protoc/go toolchain directories re-prepended to PATH on each sourcing ...] 00:18:47.486 10:14:00 -- paths/export.sh@3 -- # PATH=[... elided ...] 00:18:47.486 10:14:00 -- paths/export.sh@4 -- # PATH=[... elided ...] 00:18:47.486 10:14:00 -- paths/export.sh@5 -- # export PATH 00:18:47.486 10:14:00 -- paths/export.sh@6 -- # echo [... elided ...] 00:18:47.486 10:14:00 -- nvmf/common.sh@46 -- # : 0 00:18:47.486 10:14:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:47.486 10:14:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:47.486 10:14:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:47.486 10:14:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.486 10:14:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.486 10:14:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:47.486 10:14:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:47.486 10:14:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.486 10:14:00 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:47.486 10:14:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:47.486 10:14:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.486 10:14:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:47.486 10:14:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:47.486 10:14:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:47.486 10:14:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.487 10:14:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.487 10:14:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.487 10:14:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:47.487 10:14:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:47.487 10:14:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:47.487 10:14:00 -- common/autotest_common.sh@10 -- # set +x 00:18:52.756 10:14:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:52.756 10:14:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:52.756 10:14:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:52.756 10:14:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:52.756 10:14:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:52.756 10:14:05
-- nvmf/common.sh@292 -- # pci_drivers=() 00:18:52.756 10:14:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:52.756 10:14:05 -- nvmf/common.sh@294 -- # net_devs=() 00:18:52.756 10:14:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:52.756 10:14:05 -- nvmf/common.sh@295 -- # e810=() 00:18:52.756 10:14:05 -- nvmf/common.sh@295 -- # local -ga e810 00:18:52.756 10:14:05 -- nvmf/common.sh@296 -- # x722=() 00:18:52.756 10:14:05 -- nvmf/common.sh@296 -- # local -ga x722 00:18:52.756 10:14:05 -- nvmf/common.sh@297 -- # mlx=() 00:18:52.756 10:14:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:52.756 10:14:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.756 10:14:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:52.756 10:14:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:52.756 10:14:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:52.756 10:14:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:52.756 10:14:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:52.756 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:52.756 10:14:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:52.756 10:14:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:52.756 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:52.756 10:14:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.756 10:14:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:52.757 10:14:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:52.757 10:14:05 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.757 10:14:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:52.757 10:14:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.757 10:14:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:52.757 Found net devices under 0000:86:00.0: cvl_0_0 00:18:52.757 10:14:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.757 10:14:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:52.757 10:14:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.757 10:14:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:52.757 10:14:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.757 10:14:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:52.757 Found net devices under 0000:86:00.1: cvl_0_1 00:18:52.757 10:14:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.757 10:14:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:52.757 10:14:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:52.757 10:14:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:52.757 10:14:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.757 10:14:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.757 10:14:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.757 10:14:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:52.757 10:14:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.757 10:14:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.757 10:14:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:52.757 10:14:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.757 10:14:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.757 10:14:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:52.757 10:14:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:52.757 10:14:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.757 10:14:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.757 10:14:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.757 10:14:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.757 10:14:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:52.757 10:14:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.757 10:14:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.757 10:14:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.757 10:14:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:52.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:18:52.757 00:18:52.757 --- 10.0.0.2 ping statistics --- 00:18:52.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.757 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:18:52.757 10:14:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:18:52.757 00:18:52.757 --- 10.0.0.1 ping statistics --- 00:18:52.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.757 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:18:52.757 10:14:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.757 10:14:05 -- nvmf/common.sh@410 -- # return 0 00:18:52.757 10:14:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:52.757 10:14:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.757 10:14:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:52.757 10:14:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.757 10:14:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:52.757 10:14:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:52.757 10:14:05 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:52.757 10:14:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:52.757 10:14:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:52.757 10:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:52.757 10:14:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:52.757 10:14:05 -- nvmf/common.sh@469 -- # nvmfpid=288592 00:18:52.757 10:14:05 -- nvmf/common.sh@470 -- # waitforlisten 288592 00:18:52.757 10:14:05 -- common/autotest_common.sh@819 -- # '[' -z 288592 ']' 00:18:52.757 10:14:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.757 10:14:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:52.757 10:14:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.757 10:14:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:52.757 10:14:05 -- common/autotest_common.sh@10 -- # set +x 00:18:52.757 [2024-04-24 10:14:05.675934] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:52.757 [2024-04-24 10:14:05.675981] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.757 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.757 [2024-04-24 10:14:05.734053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.757 [2024-04-24 10:14:05.811743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:52.757 [2024-04-24 10:14:05.811854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.757 [2024-04-24 10:14:05.811862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.757 [2024-04-24 10:14:05.811869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
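
Stripped of the xtrace noise, the network plumbing that nvmf_tcp_init performed above reduces to the following sequence (interface names and addresses are exactly the ones in this log; which physical port becomes cvl_0_0 versus cvl_0_1 depends on the machine):

    # Target NIC moves into its own namespace; the initiator NIC stays in
    # the root namespace, giving a real TCP hop between the two sides.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
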
00:18:52.757 [2024-04-24 10:14:05.811905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.757 [2024-04-24 10:14:05.811998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.757 [2024-04-24 10:14:05.812001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.325 10:14:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:53.325 10:14:06 -- common/autotest_common.sh@852 -- # return 0 00:18:53.325 10:14:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:53.325 10:14:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:53.325 10:14:06 -- common/autotest_common.sh@10 -- # set +x 00:18:53.325 10:14:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.325 10:14:06 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:53.584 [2024-04-24 10:14:06.692827] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.584 10:14:06 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.843 10:14:06 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:53.843 10:14:06 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.843 10:14:07 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:53.843 10:14:07 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:54.102 10:14:07 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:54.360 10:14:07 -- target/nvmf_lvol.sh@29 -- # lvs=5d69d122-0de0-42e7-8751-8ec1098b869c 00:18:54.361 10:14:07 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5d69d122-0de0-42e7-8751-8ec1098b869c lvol 20 00:18:54.361 10:14:07 -- target/nvmf_lvol.sh@32 -- # lvol=91960468-ee78-4737-8907-223e5b429d62 00:18:54.361 10:14:07 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:54.619 10:14:07 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 91960468-ee78-4737-8907-223e5b429d62 00:18:54.878 10:14:07 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:54.878 [2024-04-24 10:14:08.132432] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.136 10:14:08 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:55.136 10:14:08 -- target/nvmf_lvol.sh@42 -- # perf_pid=289097 00:18:55.136 10:14:08 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:55.136 10:14:08 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:55.136 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.071 
10:14:09 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 91960468-ee78-4737-8907-223e5b429d62 MY_SNAPSHOT 00:18:56.329 10:14:09 -- target/nvmf_lvol.sh@47 -- # snapshot=ce65a08a-8a86-421a-beb8-65f08afa6407 00:18:56.330 10:14:09 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 91960468-ee78-4737-8907-223e5b429d62 30 00:18:56.589 10:14:09 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ce65a08a-8a86-421a-beb8-65f08afa6407 MY_CLONE 00:18:56.847 10:14:09 -- target/nvmf_lvol.sh@49 -- # clone=ce5e136f-dca2-4b0a-83a5-647f82794513 00:18:56.847 10:14:09 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ce5e136f-dca2-4b0a-83a5-647f82794513 00:18:57.105 10:14:10 -- target/nvmf_lvol.sh@53 -- # wait 289097 00:19:07.073 Initializing NVMe Controllers 00:19:07.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:07.073 Controller IO queue size 128, less than required. 00:19:07.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:07.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:07.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:07.073 Initialization complete. Launching workers. 00:19:07.073 ======================================================== 00:19:07.073 Latency(us) 00:19:07.073 Device Information : IOPS MiB/s Average min max 00:19:07.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12023.50 46.97 10647.71 2131.97 52107.17 00:19:07.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12126.10 47.37 10559.13 1282.66 59795.17 00:19:07.073 ======================================================== 00:19:07.073 Total : 24149.60 94.33 10603.23 1282.66 59795.17 00:19:07.073 00:19:07.073 10:14:18 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:07.073 10:14:18 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 91960468-ee78-4737-8907-223e5b429d62 00:19:07.073 10:14:19 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d69d122-0de0-42e7-8751-8ec1098b869c 00:19:07.073 10:14:19 -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:07.073 10:14:19 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:07.073 10:14:19 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:07.073 10:14:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.073 10:14:19 -- nvmf/common.sh@116 -- # sync 00:19:07.073 10:14:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.073 10:14:19 -- nvmf/common.sh@119 -- # set +e 00:19:07.073 10:14:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.074 10:14:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.074 rmmod nvme_tcp 00:19:07.074 rmmod nvme_fabrics 00:19:07.074 rmmod nvme_keyring 00:19:07.074 10:14:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.074 10:14:19 -- nvmf/common.sh@123 -- # set -e 00:19:07.074 10:14:19 -- nvmf/common.sh@124 -- # return 0 00:19:07.074 10:14:19 -- nvmf/common.sh@477 -- # '[' -n 288592 ']' 
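
Stripped to its rpc.py calls, the lvol lifecycle this test just exercised is the sequence below; the <...> placeholders stand for the UUIDs each call returns (5d69d122-..., 91960468-..., ce65a08a-..., and ce5e136f-... in this particular run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                     # Malloc0
    $rpc bdev_malloc_create 64 512                     # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc bdev_lvol_create_lvstore raid0 lvs            # -> <lvs uuid>
    $rpc bdev_lvol_create -u <lvs uuid> lvol 20        # -> <lvol uuid>
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_perf runs against the subsystem while the lvol is mutated:
    $rpc bdev_lvol_snapshot <lvol uuid> MY_SNAPSHOT    # -> <snapshot uuid>
    $rpc bdev_lvol_resize <lvol uuid> 30
    $rpc bdev_lvol_clone <snapshot uuid> MY_CLONE      # -> <clone uuid>
    $rpc bdev_lvol_inflate <clone uuid>
    # teardown mirrors setup:
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete <lvol uuid>
    $rpc bdev_lvol_delete_lvstore -u <lvs uuid>
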
00:19:07.074 10:14:19 -- nvmf/common.sh@478 -- # killprocess 288592 00:19:07.074 10:14:19 -- common/autotest_common.sh@926 -- # '[' -z 288592 ']' 00:19:07.074 10:14:19 -- common/autotest_common.sh@930 -- # kill -0 288592 00:19:07.074 10:14:19 -- common/autotest_common.sh@931 -- # uname 00:19:07.074 10:14:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.074 10:14:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 288592 00:19:07.074 10:14:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.074 10:14:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.074 10:14:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 288592' 00:19:07.074 killing process with pid 288592 00:19:07.074 10:14:19 -- common/autotest_common.sh@945 -- # kill 288592 00:19:07.074 10:14:19 -- common/autotest_common.sh@950 -- # wait 288592 00:19:07.074 10:14:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.074 10:14:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.074 10:14:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.074 10:14:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.074 10:14:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.074 10:14:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.074 10:14:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.074 10:14:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.450 10:14:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:08.450 00:19:08.450 real 0m21.331s 00:19:08.450 user 1m3.625s 00:19:08.450 sys 0m6.570s 00:19:08.450 10:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.450 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:19:08.450 ************************************ 00:19:08.450 END TEST nvmf_lvol 00:19:08.450 ************************************ 00:19:08.450 10:14:21 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:08.450 10:14:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:08.450 10:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.450 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:19:08.450 ************************************ 00:19:08.450 START TEST nvmf_lvs_grow 00:19:08.450 ************************************ 00:19:08.450 10:14:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:08.709 * Looking for test storage... 
00:19:08.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.709 10:14:21 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.709 10:14:21 -- nvmf/common.sh@7 -- # uname -s 00:19:08.709 10:14:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.709 10:14:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.709 10:14:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.709 10:14:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.709 10:14:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.709 10:14:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.709 10:14:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.709 10:14:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.709 10:14:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.709 10:14:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.709 10:14:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:08.709 10:14:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:08.709 10:14:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.709 10:14:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.709 10:14:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.709 10:14:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.710 10:14:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.710 10:14:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.710 10:14:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.710 10:14:21 -- paths/export.sh@2 -- # PATH=[... full value elided, same toolchain-directory repetition as in the nvmf_lvol section above ...] 00:19:08.710 10:14:21 -- paths/export.sh@3 -- # PATH=[... elided ...] 00:19:08.710 10:14:21 -- paths/export.sh@4 -- # PATH=[... elided ...] 00:19:08.710 10:14:21 -- paths/export.sh@5 -- # export PATH 00:19:08.710 10:14:21 -- paths/export.sh@6 -- # echo [... elided ...] 00:19:08.710 10:14:21 -- nvmf/common.sh@46 -- # : 0 00:19:08.710 10:14:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.710 10:14:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.710 10:14:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.710 10:14:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.710 10:14:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.710 10:14:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.710 10:14:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.710 10:14:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.710 10:14:21 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.710 10:14:21 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.710 10:14:21 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:19:08.710 10:14:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:08.710 10:14:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.710 10:14:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.710 10:14:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.710 10:14:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.710 10:14:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.710 10:14:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.710 10:14:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.710 10:14:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:08.710 10:14:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:08.710 10:14:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:08.710 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:19:13.981 10:14:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.981 10:14:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.981 10:14:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.981 10:14:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.981 10:14:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.981 10:14:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.981 10:14:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.981 10:14:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.981 10:14:27
-- nvmf/common.sh@294 -- # local -ga net_devs
00:19:13.981 10:14:27 -- nvmf/common.sh@295 -- # e810=()
00:19:13.981 10:14:27 -- nvmf/common.sh@295 -- # local -ga e810
00:19:13.981 10:14:27 -- nvmf/common.sh@296 -- # x722=()
00:19:13.981 10:14:27 -- nvmf/common.sh@296 -- # local -ga x722
00:19:13.981 10:14:27 -- nvmf/common.sh@297 -- # mlx=()
00:19:13.981 10:14:27 -- nvmf/common.sh@297 -- # local -ga mlx
00:19:13.981 10:14:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:13.981 10:14:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:19:13.981 10:14:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:19:13.981 10:14:27 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:19:13.981 10:14:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:19:13.981 10:14:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:19:13.981 Found 0000:86:00.0 (0x8086 - 0x159b)
00:19:13.981 10:14:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:19:13.981 10:14:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:19:13.981 Found 0000:86:00.1 (0x8086 - 0x159b)
00:19:13.981 10:14:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:19:13.981 10:14:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:19:13.981 10:14:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:13.981 10:14:27 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:19:13.981 10:14:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:13.981 10:14:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:19:13.981 Found net devices under 0000:86:00.0: cvl_0_0
00:19:13.981 10:14:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:19:13.981 10:14:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:19:13.981 10:14:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:13.981 10:14:27 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:19:13.981 10:14:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:13.981 10:14:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:19:13.981 Found net devices under 0000:86:00.1: cvl_0_1
00:19:13.981 10:14:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:19:13.981 10:14:27 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:19:13.981 10:14:27 -- nvmf/common.sh@402 -- # is_hw=yes
00:19:13.981 10:14:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:19:13.981 10:14:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:19:13.981 10:14:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:13.981 10:14:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:13.981 10:14:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:13.981 10:14:27 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:19:13.981 10:14:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:13.981 10:14:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:13.981 10:14:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:19:13.981 10:14:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:13.981 10:14:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:13.981 10:14:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:19:13.981 10:14:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:19:13.981 10:14:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:19:14.240 10:14:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:14.240 10:14:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:14.240 10:14:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:14.240 10:14:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:19:14.240 10:14:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:14.240 10:14:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:14.240 10:14:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:14.240 10:14:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:19:14.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:14.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:19:14.240 
00:19:14.240 --- 10.0.0.2 ping statistics ---
00:19:14.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:14.240 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:19:14.240 10:14:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:14.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:14.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:19:14.240 
00:19:14.240 --- 10.0.0.1 ping statistics ---
00:19:14.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:14.240 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:19:14.240 10:14:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:14.240 10:14:27 -- nvmf/common.sh@410 -- # return 0
00:19:14.240 10:14:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:19:14.240 10:14:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:14.240 10:14:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:19:14.240 10:14:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:19:14.240 10:14:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:14.240 10:14:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:19:14.240 10:14:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
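Condensed for reference, the nvmf_tcp_init trace above builds a two-endpoint NVMe/TCP rig out of the two ice ports: the target-side port is isolated in its own network namespace, the initiator-side port stays in the default one, and a ping in each direction proves the link. A minimal sketch using only the interface names, addresses, and port taken from this log:

  # sketch of the topology nvmf/common.sh sets up above
  ip netns add cvl_0_0_ns_spdk                                       # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> default ns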
00:19:14.499 10:14:27 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:19:14.499 10:14:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:19:14.499 10:14:27 -- common/autotest_common.sh@712 -- # xtrace_disable
00:19:14.499 10:14:27 -- common/autotest_common.sh@10 -- # set +x
00:19:14.499 10:14:27 -- nvmf/common.sh@469 -- # nvmfpid=294424
00:19:14.499 10:14:27 -- nvmf/common.sh@470 -- # waitforlisten 294424
00:19:14.499 10:14:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:19:14.499 10:14:27 -- common/autotest_common.sh@819 -- # '[' -z 294424 ']'
00:19:14.499 10:14:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:14.499 10:14:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:14.499 10:14:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:14.499 10:14:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:14.499 10:14:27 -- common/autotest_common.sh@10 -- # set +x
00:19:14.499 [2024-04-24 10:14:27.592391] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:19:14.499 [2024-04-24 10:14:27.592434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:14.499 EAL: No free 2048 kB hugepages reported on node 1
00:19:14.499 [2024-04-24 10:14:27.650323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:14.499 [2024-04-24 10:14:27.724034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:19:14.499 [2024-04-24 10:14:27.724169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:14.499 [2024-04-24 10:14:27.724177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:14.499 [2024-04-24 10:14:27.724183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:14.499 [2024-04-24 10:14:27.724203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:15.434 10:14:28 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:15.434 10:14:28 -- common/autotest_common.sh@852 -- # return 0
00:19:15.434 10:14:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:19:15.434 10:14:28 -- common/autotest_common.sh@718 -- # xtrace_disable
00:19:15.434 10:14:28 -- common/autotest_common.sh@10 -- # set +x
00:19:15.434 10:14:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:19:15.434 [2024-04-24 10:14:28.568225] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
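The target bring-up just traced reduces to two steps: launch nvmf_tgt inside the namespace and, once it answers on the RPC socket, create the TCP transport. A sketch with the same binary paths and flags as the log (the waiting loop is a crude stand-in for waitforlisten):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
  "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # flags exactly as traced above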
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow
00:19:15.434 10:14:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:19:15.434 10:14:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:15.434 10:14:28 -- common/autotest_common.sh@10 -- # set +x
00:19:15.434 ************************************
00:19:15.434 START TEST lvs_grow_clean
00:19:15.434 ************************************
00:19:15.434 10:14:28 -- common/autotest_common.sh@1104 -- # lvs_grow
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:15.434 10:14:28 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:19:15.693 10:14:28 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:19:15.693 10:14:28 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:19:15.693 10:14:28 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:15.951 10:14:28 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:15.951 10:14:28 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:19:15.951 10:14:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:19:15.951 10:14:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:19:15.951 10:14:29 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7 lvol 150
00:19:16.209 10:14:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7a341921-0627-4c94-98fb-e1050fc3bb8b
00:19:16.209 10:14:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:16.209 10:14:29 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:19:16.209 [2024-04-24 10:14:29.450259] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:19:16.209 [2024-04-24 10:14:29.450310] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:19:16.209 true
00:19:16.209 10:14:29 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:16.209 10:14:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:19:16.467 10:14:29 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:19:16.467 10:14:29 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:19:16.726 10:14:29 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a341921-0627-4c94-98fb-e1050fc3bb8b
00:19:16.726 10:14:29 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:16.983 [2024-04-24 10:14:30.084235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:16.983 10:14:30 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
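Noise aside, the lvs_grow_clean setup just traced is a straight chain from file to fabric: a 200M file becomes an AIO bdev, the bdev hosts an lvstore of 49 clusters of 4MiB, a 150M lvol is carved out of it, and the lvol is exported as a namespace of a TCP subsystem. A sketch of the same rpc.py calls ($testdir is shorthand for the aio_bdev path above; UUIDs differ per run):

  rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096       # file-backed bdev, 4K blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 200M file -> 49 data clusters
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)             # 150M logical volume
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420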
00:19:17.241 10:14:30 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=294892
00:19:17.241 10:14:30 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:17.241 10:14:30 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 294892 /var/tmp/bdevperf.sock
00:19:17.241 10:14:30 -- common/autotest_common.sh@819 -- # '[' -z 294892 ']'
00:19:17.241 10:14:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:17.241 10:14:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:17.241 10:14:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:17.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:17.241 10:14:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:17.241 10:14:30 -- common/autotest_common.sh@10 -- # set +x
00:19:17.241 10:14:30 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:19:17.241 [2024-04-24 10:14:30.308597] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:19:17.241 [2024-04-24 10:14:30.308645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294892 ]
00:19:17.241 EAL: No free 2048 kB hugepages reported on node 1
00:19:17.241 [2024-04-24 10:14:30.362587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:17.241 [2024-04-24 10:14:30.440339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:18.173 10:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:18.173 10:14:31 -- common/autotest_common.sh@852 -- # return 0
00:19:18.173 10:14:31 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:19:18.173 Nvme0n1
00:19:18.431 10:14:31 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:19:18.431 [
00:19:18.431   {
00:19:18.431     "name": "Nvme0n1",
00:19:18.431     "aliases": [
00:19:18.431       "7a341921-0627-4c94-98fb-e1050fc3bb8b"
00:19:18.431     ],
00:19:18.431     "product_name": "NVMe disk",
00:19:18.431     "block_size": 4096,
00:19:18.431     "num_blocks": 38912,
00:19:18.431     "uuid": "7a341921-0627-4c94-98fb-e1050fc3bb8b",
00:19:18.431     "assigned_rate_limits": {
00:19:18.431       "rw_ios_per_sec": 0,
00:19:18.431       "rw_mbytes_per_sec": 0,
00:19:18.431       "r_mbytes_per_sec": 0,
00:19:18.431       "w_mbytes_per_sec": 0
00:19:18.431     },
00:19:18.431     "claimed": false,
00:19:18.431     "zoned": false,
00:19:18.431     "supported_io_types": {
00:19:18.431       "read": true,
00:19:18.431       "write": true,
00:19:18.431       "unmap": true,
00:19:18.431       "write_zeroes": true,
00:19:18.431       "flush": true,
00:19:18.431       "reset": true,
00:19:18.431       "compare": true,
00:19:18.431       "compare_and_write": true,
00:19:18.431       "abort": true,
00:19:18.431       "nvme_admin": true,
00:19:18.431       "nvme_io": true
00:19:18.431     },
00:19:18.431     "driver_specific": {
00:19:18.431       "nvme": [
00:19:18.431         {
00:19:18.431           "trid": {
00:19:18.431             "trtype": "TCP",
00:19:18.431             "adrfam": "IPv4",
00:19:18.431             "traddr": "10.0.0.2",
00:19:18.431             "trsvcid": "4420",
00:19:18.431             "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:19:18.431           },
00:19:18.431           "ctrlr_data": {
00:19:18.431             "cntlid": 1,
00:19:18.431             "vendor_id": "0x8086",
00:19:18.431             "model_number": "SPDK bdev Controller",
00:19:18.431             "serial_number": "SPDK0",
00:19:18.431             "firmware_revision": "24.01.1",
00:19:18.431             "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:19:18.431             "oacs": {
00:19:18.431               "security": 0,
00:19:18.431               "format": 0,
00:19:18.431               "firmware": 0,
00:19:18.431               "ns_manage": 0
00:19:18.431             },
00:19:18.431             "multi_ctrlr": true,
00:19:18.431             "ana_reporting": false
00:19:18.431           },
00:19:18.431           "vs": {
00:19:18.431             "nvme_version": "1.3"
00:19:18.431           },
00:19:18.431           "ns_data": {
00:19:18.431             "id": 1,
00:19:18.431             "can_share": true
00:19:18.431           }
00:19:18.431         }
00:19:18.431       ],
00:19:18.431       "mp_policy": "active_passive"
00:19:18.431     }
00:19:18.431   }
00:19:18.431 ]
00:19:18.431 10:14:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=295117
00:19:18.431 10:14:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:19:18.431 10:14:31 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
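bdevperf runs as a second SPDK application with its own RPC socket; started with -z it sits idle until a bdev is attached and perform_tests is issued. The attach-and-run step above, condensed (same socket, names, and flags as the trace):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # prints Nvme0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000     # wait up to 3s for the bdev
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                    # kick off the queued randwrite job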
00:19:18.689 Running I/O for 10 seconds...
00:19:19.626                                 Latency(us)
00:19:19.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:19.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:19.626 Nvme0n1 : 1.00 23321.00 91.10 0.00 0.00 0.00 0.00 0.00
00:19:19.626 ===================================================================================================================
00:19:19.626 Total : 23321.00 91.10 0.00 0.00 0.00 0.00 0.00
00:19:19.626 
00:19:20.559 10:14:33 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:20.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:20.559 Nvme0n1 : 2.00 23515.50 91.86 0.00 0.00 0.00 0.00 0.00
00:19:20.559 ===================================================================================================================
00:19:20.559 Total : 23515.50 91.86 0.00 0.00 0.00 0.00 0.00
00:19:20.559 
00:19:20.559 true
00:19:20.559 10:14:33 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:20.559 10:14:33 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:19:20.816 10:14:34 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:19:20.816 10:14:34 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:19:20.816 10:14:34 -- target/nvmf_lvs_grow.sh@65 -- # wait 295117
00:19:21.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:21.805 Nvme0n1 : 3.00 23564.67 92.05 0.00 0.00 0.00 0.00 0.00
00:19:21.805 ===================================================================================================================
00:19:21.805 Total : 23564.67 92.05 0.00 0.00 0.00 0.00 0.00
00:19:21.805 
00:19:22.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:22.739 Nvme0n1 : 4.00 23657.25 92.41 0.00 0.00 0.00 0.00 0.00
00:19:22.739 ===================================================================================================================
00:19:22.739 Total : 23657.25 92.41 0.00 0.00 0.00 0.00 0.00
00:19:22.739 
00:19:23.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:23.673 Nvme0n1 : 5.00 23700.20 92.58 0.00 0.00 0.00 0.00 0.00
00:19:23.673 ===================================================================================================================
00:19:23.673 Total : 23700.20 92.58 0.00 0.00 0.00 0.00 0.00
00:19:23.673 
00:19:24.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:24.606 Nvme0n1 : 6.00 23742.17 92.74 0.00 0.00 0.00 0.00 0.00
00:19:24.606 ===================================================================================================================
00:19:24.606 Total : 23742.17 92.74 0.00 0.00 0.00 0.00 0.00
00:19:24.606 
00:19:25.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:25.540 Nvme0n1 : 7.00 23776.57 92.88 0.00 0.00 0.00 0.00 0.00
00:19:25.540 ===================================================================================================================
00:19:25.540 Total : 23776.57 92.88 0.00 0.00 0.00 0.00 0.00
00:19:25.540 
00:19:26.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:26.474 Nvme0n1 : 8.00 23804.50 92.99 0.00 0.00 0.00 0.00 0.00
00:19:26.474 ===================================================================================================================
00:19:26.474 Total : 23804.50 92.99 0.00 0.00 0.00 0.00 0.00
00:19:26.474 
00:19:27.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:27.846 Nvme0n1 : 9.00 23783.67 92.90 0.00 0.00 0.00 0.00 0.00
00:19:27.846 ===================================================================================================================
00:19:27.846 Total : 23783.67 92.90 0.00 0.00 0.00 0.00 0.00
00:19:27.846 
00:19:28.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:28.779 Nvme0n1 : 10.00 23792.40 92.94 0.00 0.00 0.00 0.00 0.00
00:19:28.779 ===================================================================================================================
00:19:28.779 Total : 23792.40 92.94 0.00 0.00 0.00 0.00 0.00
00:19:28.779 
00:19:28.779 
00:19:28.779                                 Latency(us)
00:19:28.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:28.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:28.779 Nvme0n1 : 10.01 23791.77 92.94 0.00 0.00 5376.61 3348.03 16754.42
00:19:28.779 ===================================================================================================================
00:19:28.779 Total : 23791.77 92.94 0.00 0.00 5376.61 3348.03 16754.42
00:19:28.779 0
00:19:28.779 10:14:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 294892
00:19:28.779 10:14:41 -- common/autotest_common.sh@926 -- # '[' -z 294892 ']'
00:19:28.779 10:14:41 -- common/autotest_common.sh@930 -- # kill -0 294892
00:19:28.779 10:14:41 -- common/autotest_common.sh@931 -- # uname
00:19:28.779 10:14:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:19:28.779 10:14:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 294892
00:19:28.779 10:14:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:19:28.779 10:14:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:19:28.779 10:14:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 294892'
00:19:28.779 killing process with pid 294892
00:19:28.779 10:14:41 -- common/autotest_common.sh@945 -- # kill 294892
00:19:28.779 Received shutdown signal, test time was about 10.000000 seconds
00:19:28.779 
00:19:28.779                                 Latency(us)
00:19:28.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:28.779 ===================================================================================================================
00:19:28.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:28.779 10:14:41 -- common/autotest_common.sh@950 -- # wait 294892
00:19:28.779 10:14:42 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:19:29.037 10:14:42 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:29.037 10:14:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:19:29.294 10:14:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:19:29.294 10:14:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]]
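The point of the ten-second window above: the backing file was already grown to 400M and rescanned during setup, and at second 2 the script calls bdev_lvol_grow_lvstore while bdevperf keeps writing. The pass condition is total_data_clusters moving from 49 to 99 with the Fail/s and TO/s columns staying at 0.00. The check, as a sketch ($lvs as above):

  rpc.py bdev_lvol_grow_lvstore -u "$lvs"        # grow the lvstore under live I/O
  clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 ))                           # 49 -> 99 clusters of 4MiB each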
00:19:29.294 10:14:42 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:19:29.294 [2024-04-24 10:14:42.542647] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:19:29.551 10:14:42 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:29.551 10:14:42 -- common/autotest_common.sh@640 -- # local es=0
00:19:29.551 10:14:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:29.551 10:14:42 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:19:29.551 10:14:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:29.551 10:14:42 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:19:29.551 10:14:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:29.551 10:14:42 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:19:29.551 10:14:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:19:29.551 10:14:42 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:19:29.551 10:14:42 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:19:29.551 10:14:42 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:29.551 request:
00:19:29.551 {
00:19:29.551   "uuid": "a5f39ae1-fd38-4e96-9e7a-40b28607bdf7",
00:19:29.551   "method": "bdev_lvol_get_lvstores",
00:19:29.551   "req_id": 1
00:19:29.551 }
00:19:29.551 Got JSON-RPC error response
00:19:29.551 response:
00:19:29.551 {
00:19:29.551   "code": -19,
00:19:29.551   "message": "No such device"
00:19:29.551 }
00:19:29.551 10:14:42 -- common/autotest_common.sh@643 -- # es=1
00:19:29.551 10:14:42 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:19:29.551 10:14:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:19:29.551 10:14:42 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:19:29.551 10:14:42 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:19:29.808 aio_bdev
00:19:29.808 10:14:42 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7a341921-0627-4c94-98fb-e1050fc3bb8b
00:19:29.808 10:14:42 -- common/autotest_common.sh@887 -- # local bdev_name=7a341921-0627-4c94-98fb-e1050fc3bb8b
00:19:29.808 10:14:42 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:19:29.808 10:14:42 -- common/autotest_common.sh@889 -- # local i
00:19:29.808 10:14:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:19:29.808 10:14:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:19:29.808 10:14:42 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:19:29.808 10:14:43 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7a341921-0627-4c94-98fb-e1050fc3bb8b -t 2000
00:19:30.067 [
00:19:30.067   {
00:19:30.067     "name": "7a341921-0627-4c94-98fb-e1050fc3bb8b",
00:19:30.067     "aliases": [
00:19:30.067       "lvs/lvol"
00:19:30.067     ],
00:19:30.067     "product_name": "Logical Volume",
00:19:30.067     "block_size": 4096,
00:19:30.067     "num_blocks": 38912,
00:19:30.067     "uuid": "7a341921-0627-4c94-98fb-e1050fc3bb8b",
00:19:30.067     "assigned_rate_limits": {
00:19:30.067       "rw_ios_per_sec": 0,
00:19:30.067       "rw_mbytes_per_sec": 0,
00:19:30.067       "r_mbytes_per_sec": 0,
00:19:30.067       "w_mbytes_per_sec": 0
00:19:30.067     },
00:19:30.067     "claimed": false,
00:19:30.067     "zoned": false,
00:19:30.067     "supported_io_types": {
00:19:30.067       "read": true,
00:19:30.067       "write": true,
00:19:30.067       "unmap": true,
00:19:30.067       "write_zeroes": true,
00:19:30.067       "flush": false,
00:19:30.067       "reset": true,
00:19:30.067       "compare": false,
00:19:30.067       "compare_and_write": false,
00:19:30.067       "abort": false,
00:19:30.067       "nvme_admin": false,
00:19:30.067       "nvme_io": false
00:19:30.067     },
00:19:30.067     "driver_specific": {
00:19:30.067       "lvol": {
00:19:30.067         "lvol_store_uuid": "a5f39ae1-fd38-4e96-9e7a-40b28607bdf7",
00:19:30.067         "base_bdev": "aio_bdev",
00:19:30.067         "thin_provision": false,
00:19:30.067         "snapshot": false,
00:19:30.067         "clone": false,
00:19:30.067         "esnap_clone": false
00:19:30.067       }
00:19:30.067     }
00:19:30.067   }
00:19:30.067 ]
00:19:30.067 10:14:43 -- common/autotest_common.sh@895 -- # return 0
00:19:30.067 10:14:43 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:30.067 10:14:43 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters'
00:19:30.325 10:14:43 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 ))
00:19:30.325 10:14:43 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:30.325 10:14:43 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:19:30.325 10:14:43 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:19:30.325 10:14:43 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7a341921-0627-4c94-98fb-e1050fc3bb8b
00:19:30.583 10:14:43 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5f39ae1-fd38-4e96-9e7a-40b28607bdf7
00:19:30.840 10:14:43 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:30.840 
00:19:30.840 real 0m15.487s
00:19:30.840 user 0m15.292s
00:19:30.840 sys 0m1.341s
00:19:30.840 10:14:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:30.840 10:14:44 -- common/autotest_common.sh@10 -- # set +x
00:19:30.840 ************************************
00:19:30.840 END TEST lvs_grow_clean
00:19:30.840 ************************************
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty
00:19:30.840 10:14:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:19:30.840 10:14:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:30.840 10:14:44 -- common/autotest_common.sh@10 -- # set +x
00:19:30.840 ************************************
00:19:30.840 START TEST lvs_grow_dirty
00:19:30.840 ************************************
00:19:30.840 10:14:44 -- common/autotest_common.sh@1104 -- # lvs_grow dirty
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:19:30.840 10:14:44 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:31.098 10:14:44 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:31.098 10:14:44 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:19:31.098 10:14:44 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:19:31.098 10:14:44 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:19:31.356 10:14:44 -- target/nvmf_lvs_grow.sh@28 -- # lvs=df6d0046-aaf5-46df-a064-8b9eb90ea7a4
00:19:31.356 10:14:44 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4
00:19:31.356 10:14:44 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:19:31.613 10:14:44 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:19:31.613 10:14:44 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:19:31.613 10:14:44 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 lvol 150
00:19:31.613 10:14:44 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8693328f-e414-46fb-ac5f-2de2171576c2
00:19:31.613 10:14:44 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:31.613 10:14:44 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:19:31.871 [2024-04-24 10:14:44.965541] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:19:31.871 [2024-04-24 10:14:44.965595] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:19:31.871 true
00:19:31.871 10:14:44 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4
00:19:31.871 10:14:44 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:19:32.129 10:14:45 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:19:32.129 10:14:45 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:19:32.129 10:14:45 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8693328f-e414-46fb-ac5f-2de2171576c2
00:19:32.386 10:14:45 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:32.386 10:14:45 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:19:32.644 10:14:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=297588
00:19:32.644 10:14:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:32.644 10:14:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 297588 /var/tmp/bdevperf.sock
00:19:32.644 10:14:45 -- common/autotest_common.sh@819 -- # '[' -z 297588 ']'
00:19:32.644 10:14:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:32.644 10:14:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:32.644 10:14:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:32.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:32.644 10:14:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:32.644 10:14:45 -- common/autotest_common.sh@10 -- # set +x
00:19:32.644 10:14:45 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:19:32.644 [2024-04-24 10:14:45.842478] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:19:32.644 [2024-04-24 10:14:45.842527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297588 ]
00:19:32.644 EAL: No free 2048 kB hugepages reported on node 1
00:19:32.644 [2024-04-24 10:14:45.895975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:32.901 [2024-04-24 10:14:45.973522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:33.464 10:14:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:33.464 10:14:46 -- common/autotest_common.sh@852 -- # return 0
"write_zeroes": true, 00:19:34.028 "flush": true, 00:19:34.028 "reset": true, 00:19:34.028 "compare": true, 00:19:34.028 "compare_and_write": true, 00:19:34.028 "abort": true, 00:19:34.028 "nvme_admin": true, 00:19:34.028 "nvme_io": true 00:19:34.028 }, 00:19:34.028 "driver_specific": { 00:19:34.028 "nvme": [ 00:19:34.028 { 00:19:34.028 "trid": { 00:19:34.028 "trtype": "TCP", 00:19:34.028 "adrfam": "IPv4", 00:19:34.028 "traddr": "10.0.0.2", 00:19:34.028 "trsvcid": "4420", 00:19:34.028 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:34.028 }, 00:19:34.028 "ctrlr_data": { 00:19:34.028 "cntlid": 1, 00:19:34.028 "vendor_id": "0x8086", 00:19:34.028 "model_number": "SPDK bdev Controller", 00:19:34.028 "serial_number": "SPDK0", 00:19:34.028 "firmware_revision": "24.01.1", 00:19:34.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:34.028 "oacs": { 00:19:34.028 "security": 0, 00:19:34.028 "format": 0, 00:19:34.028 "firmware": 0, 00:19:34.028 "ns_manage": 0 00:19:34.028 }, 00:19:34.028 "multi_ctrlr": true, 00:19:34.028 "ana_reporting": false 00:19:34.028 }, 00:19:34.028 "vs": { 00:19:34.028 "nvme_version": "1.3" 00:19:34.028 }, 00:19:34.028 "ns_data": { 00:19:34.028 "id": 1, 00:19:34.028 "can_share": true 00:19:34.028 } 00:19:34.028 } 00:19:34.028 ], 00:19:34.028 "mp_policy": "active_passive" 00:19:34.028 } 00:19:34.028 } 00:19:34.028 ] 00:19:34.028 10:14:47 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=297826 00:19:34.028 10:14:47 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:34.028 10:14:47 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.028 Running I/O for 10 seconds... 00:19:35.452 Latency(us) 00:19:35.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:35.452 Nvme0n1 : 1.00 23435.00 91.54 0.00 0.00 0.00 0.00 0.00 00:19:35.452 =================================================================================================================== 00:19:35.452 Total : 23435.00 91.54 0.00 0.00 0.00 0.00 0.00 00:19:35.452 00:19:36.016 10:14:49 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:36.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:36.016 Nvme0n1 : 2.00 23606.00 92.21 0.00 0.00 0.00 0.00 0.00 00:19:36.016 =================================================================================================================== 00:19:36.016 Total : 23606.00 92.21 0.00 0.00 0.00 0.00 0.00 00:19:36.016 00:19:36.273 true 00:19:36.273 10:14:49 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:36.273 10:14:49 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:36.273 10:14:49 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:36.273 10:14:49 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:36.273 10:14:49 -- target/nvmf_lvs_grow.sh@65 -- # wait 297826 00:19:37.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:37.204 Nvme0n1 : 3.00 23555.33 92.01 0.00 0.00 0.00 0.00 0.00 00:19:37.204 =================================================================================================================== 00:19:37.204 Total 
: 23555.33 92.01 0.00 0.00 0.00 0.00 0.00 00:19:37.204 00:19:38.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.136 Nvme0n1 : 4.00 23586.25 92.13 0.00 0.00 0.00 0.00 0.00 00:19:38.136 =================================================================================================================== 00:19:38.136 Total : 23586.25 92.13 0.00 0.00 0.00 0.00 0.00 00:19:38.136 00:19:39.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:39.069 Nvme0n1 : 5.00 23643.40 92.36 0.00 0.00 0.00 0.00 0.00 00:19:39.069 =================================================================================================================== 00:19:39.069 Total : 23643.40 92.36 0.00 0.00 0.00 0.00 0.00 00:19:39.069 00:19:40.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:40.441 Nvme0n1 : 6.00 23702.83 92.59 0.00 0.00 0.00 0.00 0.00 00:19:40.441 =================================================================================================================== 00:19:40.441 Total : 23702.83 92.59 0.00 0.00 0.00 0.00 0.00 00:19:40.441 00:19:41.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:41.373 Nvme0n1 : 7.00 23754.43 92.79 0.00 0.00 0.00 0.00 0.00 00:19:41.373 =================================================================================================================== 00:19:41.373 Total : 23754.43 92.79 0.00 0.00 0.00 0.00 0.00 00:19:41.373 00:19:42.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.305 Nvme0n1 : 8.00 23795.38 92.95 0.00 0.00 0.00 0.00 0.00 00:19:42.305 =================================================================================================================== 00:19:42.305 Total : 23795.38 92.95 0.00 0.00 0.00 0.00 0.00 00:19:42.305 00:19:43.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:43.237 Nvme0n1 : 9.00 23823.22 93.06 0.00 0.00 0.00 0.00 0.00 00:19:43.237 =================================================================================================================== 00:19:43.237 Total : 23823.22 93.06 0.00 0.00 0.00 0.00 0.00 00:19:43.237 00:19:44.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:44.169 Nvme0n1 : 10.00 23844.30 93.14 0.00 0.00 0.00 0.00 0.00 00:19:44.169 =================================================================================================================== 00:19:44.169 Total : 23844.30 93.14 0.00 0.00 0.00 0.00 0.00 00:19:44.169 00:19:44.169 00:19:44.169 Latency(us) 00:19:44.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:44.169 Nvme0n1 : 10.00 23847.26 93.15 0.00 0.00 5364.47 3704.21 14075.99 00:19:44.169 =================================================================================================================== 00:19:44.169 Total : 23847.26 93.15 0.00 0.00 5364.47 3704.21 14075.99 00:19:44.169 0 00:19:44.169 10:14:57 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 297588 00:19:44.169 10:14:57 -- common/autotest_common.sh@926 -- # '[' -z 297588 ']' 00:19:44.169 10:14:57 -- common/autotest_common.sh@930 -- # kill -0 297588 00:19:44.169 10:14:57 -- common/autotest_common.sh@931 -- # uname 00:19:44.169 10:14:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:44.169 10:14:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
00:19:44.169 10:14:57 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 297588
00:19:44.169 10:14:57 -- common/autotest_common.sh@926 -- # '[' -z 297588 ']'
00:19:44.169 10:14:57 -- common/autotest_common.sh@930 -- # kill -0 297588
00:19:44.169 10:14:57 -- common/autotest_common.sh@931 -- # uname
00:19:44.169 10:14:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:19:44.169 10:14:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 297588
00:19:44.169 10:14:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:19:44.169 10:14:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:19:44.169 10:14:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 297588'
00:19:44.170 killing process with pid 297588
00:19:44.170 10:14:57 -- common/autotest_common.sh@945 -- # kill 297588
00:19:44.170 Received shutdown signal, test time was about 10.000000 seconds
00:19:44.170 
00:19:44.170                                 Latency(us)
00:19:44.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:44.170 ===================================================================================================================
00:19:44.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:44.170 10:14:57 -- common/autotest_common.sh@950 -- # wait 297588
00:19:44.427 10:14:57 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:19:44.685 10:14:57 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4
00:19:44.685 10:14:57 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:19:44.685 10:14:57 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:19:44.685 10:14:57 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]]
00:19:44.685 10:14:57 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 294424
00:19:44.945 10:14:57 -- target/nvmf_lvs_grow.sh@74 -- # wait 294424
00:19:44.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 294424 Killed "${NVMF_APP[@]}" "$@"
00:19:44.945 10:14:57 -- target/nvmf_lvs_grow.sh@74 -- # true
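This is where the dirty variant earns its name: instead of an orderly shutdown, the first nvmf_tgt (pid 294424) is killed with SIGKILL while the lvstore is still open, so the blobstore on the AIO file is left without a clean-shutdown mark, and a fresh target is started to load it. The crash step, condensed:

  kill -9 "$nvmfpid"        # no lvstore unload, on-disk metadata stays dirty
  wait "$nvmfpid" || true   # reap the Killed status seen above
  # restart nvmf_tgt in the same namespace and re-create the AIO bdev from the same file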
00:19:44.945 10:14:57 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1
00:19:44.945 10:14:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:19:44.945 10:14:57 -- common/autotest_common.sh@712 -- # xtrace_disable
00:19:44.945 10:14:57 -- common/autotest_common.sh@10 -- # set +x
00:19:44.945 10:14:57 -- nvmf/common.sh@469 -- # nvmfpid=299534
00:19:44.945 10:14:57 -- nvmf/common.sh@470 -- # waitforlisten 299534
00:19:44.945 10:14:57 -- common/autotest_common.sh@819 -- # '[' -z 299534 ']'
00:19:44.945 10:14:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:44.945 10:14:57 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:44.945 10:14:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:44.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:44.945 10:14:57 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:44.945 10:14:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:19:44.945 10:14:57 -- common/autotest_common.sh@10 -- # set +x
00:19:44.945 [2024-04-24 10:14:58.021861] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:19:44.945 [2024-04-24 10:14:58.021906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:44.945 EAL: No free 2048 kB hugepages reported on node 1
00:19:44.945 [2024-04-24 10:14:58.080405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.945 [2024-04-24 10:14:58.156482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:19:44.945 [2024-04-24 10:14:58.156588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:44.945 [2024-04-24 10:14:58.156596] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:44.945 [2024-04-24 10:14:58.156602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:44.945 [2024-04-24 10:14:58.156617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:45.877 10:14:58 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:45.877 10:14:58 -- common/autotest_common.sh@852 -- # return 0
00:19:45.877 10:14:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:19:45.877 10:14:58 -- common/autotest_common.sh@718 -- # xtrace_disable
00:19:45.877 10:14:58 -- common/autotest_common.sh@10 -- # set +x
00:19:45.877 10:14:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:45.877 10:14:58 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:19:45.877 [2024-04-24 10:14:58.993512] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore
00:19:45.877 [2024-04-24 10:14:58.993591] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:19:45.877 [2024-04-24 10:14:58.993614] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:19:45.877 10:14:59 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev
00:19:45.877 10:14:59 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 8693328f-e414-46fb-ac5f-2de2171576c2
00:19:45.877 10:14:59 -- common/autotest_common.sh@887 -- # local bdev_name=8693328f-e414-46fb-ac5f-2de2171576c2
00:19:45.877 10:14:59 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:19:45.877 10:14:59 -- common/autotest_common.sh@889 -- # local i
00:19:45.877 10:14:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:19:45.877 10:14:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:19:45.877 10:14:59 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
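The bs_recover and bs_load_replay_md_cpl notices above are the dirty-load path doing its work: finding no clean-shutdown marker, the blobstore replays per-blob metadata and recovers blobs 0x0 and 0x1, after which vbdev_lvol can re-register the lvol. The test then only has to re-create the bdev and poll for the lvol to reappear, roughly:

  rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096   # loading a dirty blobstore triggers recovery
  rpc.py bdev_wait_for_examine                               # let vbdev_lvol finish claiming it
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000                   # poll up to 2s for the recovered lvol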
"w_mbytes_per_sec": 0 00:19:46.135 }, 00:19:46.135 "claimed": false, 00:19:46.135 "zoned": false, 00:19:46.135 "supported_io_types": { 00:19:46.135 "read": true, 00:19:46.135 "write": true, 00:19:46.135 "unmap": true, 00:19:46.135 "write_zeroes": true, 00:19:46.135 "flush": false, 00:19:46.135 "reset": true, 00:19:46.135 "compare": false, 00:19:46.135 "compare_and_write": false, 00:19:46.135 "abort": false, 00:19:46.135 "nvme_admin": false, 00:19:46.135 "nvme_io": false 00:19:46.135 }, 00:19:46.135 "driver_specific": { 00:19:46.135 "lvol": { 00:19:46.135 "lvol_store_uuid": "df6d0046-aaf5-46df-a064-8b9eb90ea7a4", 00:19:46.135 "base_bdev": "aio_bdev", 00:19:46.135 "thin_provision": false, 00:19:46.135 "snapshot": false, 00:19:46.135 "clone": false, 00:19:46.135 "esnap_clone": false 00:19:46.135 } 00:19:46.135 } 00:19:46.135 } 00:19:46.135 ] 00:19:46.135 10:14:59 -- common/autotest_common.sh@895 -- # return 0 00:19:46.135 10:14:59 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:46.135 10:14:59 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:46.393 10:14:59 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:46.393 10:14:59 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:46.393 10:14:59 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:46.650 10:14:59 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:46.650 10:14:59 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:46.650 [2024-04-24 10:14:59.850316] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:46.650 10:14:59 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:46.650 10:14:59 -- common/autotest_common.sh@640 -- # local es=0 00:19:46.650 10:14:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:46.650 10:14:59 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.650 10:14:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:46.650 10:14:59 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.650 10:14:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:46.650 10:14:59 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.650 10:14:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:46.650 10:14:59 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:46.650 10:14:59 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:46.650 10:14:59 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:46.915 request: 00:19:46.916 { 00:19:46.916 
"uuid": "df6d0046-aaf5-46df-a064-8b9eb90ea7a4", 00:19:46.916 "method": "bdev_lvol_get_lvstores", 00:19:46.916 "req_id": 1 00:19:46.916 } 00:19:46.916 Got JSON-RPC error response 00:19:46.916 response: 00:19:46.916 { 00:19:46.916 "code": -19, 00:19:46.916 "message": "No such device" 00:19:46.916 } 00:19:46.916 10:15:00 -- common/autotest_common.sh@643 -- # es=1 00:19:46.916 10:15:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:46.916 10:15:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:46.916 10:15:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:46.916 10:15:00 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:47.176 aio_bdev 00:19:47.176 10:15:00 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8693328f-e414-46fb-ac5f-2de2171576c2 00:19:47.176 10:15:00 -- common/autotest_common.sh@887 -- # local bdev_name=8693328f-e414-46fb-ac5f-2de2171576c2 00:19:47.176 10:15:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:47.176 10:15:00 -- common/autotest_common.sh@889 -- # local i 00:19:47.176 10:15:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:47.176 10:15:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:47.176 10:15:00 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:47.176 10:15:00 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8693328f-e414-46fb-ac5f-2de2171576c2 -t 2000 00:19:47.433 [ 00:19:47.433 { 00:19:47.433 "name": "8693328f-e414-46fb-ac5f-2de2171576c2", 00:19:47.433 "aliases": [ 00:19:47.433 "lvs/lvol" 00:19:47.433 ], 00:19:47.433 "product_name": "Logical Volume", 00:19:47.433 "block_size": 4096, 00:19:47.433 "num_blocks": 38912, 00:19:47.433 "uuid": "8693328f-e414-46fb-ac5f-2de2171576c2", 00:19:47.433 "assigned_rate_limits": { 00:19:47.433 "rw_ios_per_sec": 0, 00:19:47.433 "rw_mbytes_per_sec": 0, 00:19:47.433 "r_mbytes_per_sec": 0, 00:19:47.433 "w_mbytes_per_sec": 0 00:19:47.433 }, 00:19:47.433 "claimed": false, 00:19:47.433 "zoned": false, 00:19:47.433 "supported_io_types": { 00:19:47.433 "read": true, 00:19:47.433 "write": true, 00:19:47.433 "unmap": true, 00:19:47.433 "write_zeroes": true, 00:19:47.433 "flush": false, 00:19:47.433 "reset": true, 00:19:47.433 "compare": false, 00:19:47.433 "compare_and_write": false, 00:19:47.433 "abort": false, 00:19:47.433 "nvme_admin": false, 00:19:47.433 "nvme_io": false 00:19:47.433 }, 00:19:47.433 "driver_specific": { 00:19:47.433 "lvol": { 00:19:47.433 "lvol_store_uuid": "df6d0046-aaf5-46df-a064-8b9eb90ea7a4", 00:19:47.433 "base_bdev": "aio_bdev", 00:19:47.433 "thin_provision": false, 00:19:47.433 "snapshot": false, 00:19:47.433 "clone": false, 00:19:47.433 "esnap_clone": false 00:19:47.433 } 00:19:47.433 } 00:19:47.433 } 00:19:47.433 ] 00:19:47.433 10:15:00 -- common/autotest_common.sh@895 -- # return 0 00:19:47.433 10:15:00 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4 00:19:47.433 10:15:00 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:47.689 10:15:00 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:47.689 10:15:00 -- target/nvmf_lvs_grow.sh@88 -- # 
00:19:47.689 10:15:00 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4
00:19:47.689 10:15:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:19:47.689 10:15:00 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:19:47.689 10:15:00 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8693328f-e414-46fb-ac5f-2de2171576c2
00:19:47.946 10:15:01 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df6d0046-aaf5-46df-a064-8b9eb90ea7a4
00:19:48.204 10:15:01 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:19:48.204 10:15:01 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:19:48.463 
00:19:48.463 real 0m17.396s
00:19:48.463 user 0m44.400s
00:19:48.463 sys 0m3.714s
00:19:48.463 10:15:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:48.463 10:15:01 -- common/autotest_common.sh@10 -- # set +x
00:19:48.463 ************************************
00:19:48.463 END TEST lvs_grow_dirty
00:19:48.463 ************************************
00:19:48.463 10:15:01 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:19:48.463 10:15:01 -- common/autotest_common.sh@796 -- # type=--id
00:19:48.463 10:15:01 -- common/autotest_common.sh@797 -- # id=0
00:19:48.463 10:15:01 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']'
00:19:48.463 10:15:01 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:48.463 10:15:01 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0
00:19:48.463 10:15:01 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]]
00:19:48.463 10:15:01 -- common/autotest_common.sh@808 -- # for n in $shm_files
00:19:48.463 10:15:01 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:48.463 nvmf_trace.0
00:19:48.463 10:15:01 -- common/autotest_common.sh@811 -- # return 0
00:19:48.463 10:15:01 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:19:48.463 10:15:01 -- nvmf/common.sh@476 -- # nvmfcleanup
00:19:48.463 10:15:01 -- nvmf/common.sh@116 -- # sync
00:19:48.463 10:15:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:19:48.463 10:15:01 -- nvmf/common.sh@119 -- # set +e
00:19:48.463 10:15:01 -- nvmf/common.sh@120 -- # for i in {1..20}
00:19:48.463 10:15:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:19:48.463 rmmod nvme_tcp
00:19:48.463 rmmod nvme_fabrics
00:19:48.463 rmmod nvme_keyring
00:19:48.463 10:15:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:19:48.463 10:15:01 -- nvmf/common.sh@123 -- # set -e
00:19:48.463 10:15:01 -- nvmf/common.sh@124 -- # return 0
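nvmftestfini above is the standard teardown: tar up the spdk_trace shared-memory file for offline analysis, unload the kernel initiator modules, kill the target, and undo the namespace plumbing. Condensed, with the namespace removal an assumption about what _remove_spdk_ns amounts to here:

  tar -C /dev/shm/ -cvzf "$output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep trace data ($output: the ../output dir above)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1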
process_name=reactor_0 00:19:48.463 10:15:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:48.463 10:15:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 299534' 00:19:48.463 killing process with pid 299534 00:19:48.463 10:15:01 -- common/autotest_common.sh@945 -- # kill 299534 00:19:48.463 10:15:01 -- common/autotest_common.sh@950 -- # wait 299534 00:19:48.722 10:15:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:48.722 10:15:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:48.722 10:15:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:48.722 10:15:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.722 10:15:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:48.722 10:15:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.722 10:15:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.722 10:15:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.258 10:15:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:51.258 00:19:51.258 real 0m42.287s 00:19:51.258 user 1m5.522s 00:19:51.258 sys 0m9.756s 00:19:51.258 10:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.258 10:15:03 -- common/autotest_common.sh@10 -- # set +x 00:19:51.258 ************************************ 00:19:51.258 END TEST nvmf_lvs_grow 00:19:51.258 ************************************ 00:19:51.258 10:15:04 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:51.258 10:15:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:51.258 10:15:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:51.258 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:19:51.259 ************************************ 00:19:51.259 START TEST nvmf_bdev_io_wait 00:19:51.259 ************************************ 00:19:51.259 10:15:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:51.259 * Looking for test storage... 
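[Note] Condensed from the xtrace above: the lvs_grow teardown verifies the grown lvstore over RPC, unwinds it, and nvmftestfini then archives the trace shm file, unloads the kernel modules, and kills the target. A minimal sketch paraphrasing the scripts rather than quoting them; the UUIDs and pid are the ones from this run, and $rootdir/$output_dir are stand-ins for the long Jenkins paths:
  cd "$rootdir"                                   # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  lvs_uuid=df6d0046-aaf5-46df-a064-8b9eb90ea7a4   # lvstore created on the aio_bdev
  clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 ))                            # the grow that the test asserts
  scripts/rpc.py bdev_lvol_delete 8693328f-e414-46fb-ac5f-2de2171576c2
  scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs_uuid"
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f test/nvmf/target/aio_bdev                 # backing file of the AIO bdev
  tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # process_shm --id 0
  modprobe -v -r nvme-tcp                         # also drops nvme_fabrics / nvme_keyring
  kill 299534 && wait 299534                      # nvmf_tgt pid for this suite
  ip -4 addr flush cvl_0_1                        # initiator-side test address
The same fini sequence recurs verbatim at the end of every suite below.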
00:19:51.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.259 10:15:04 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.259 10:15:04 -- nvmf/common.sh@7 -- # uname -s 00:19:51.259 10:15:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.259 10:15:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.259 10:15:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.259 10:15:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.259 10:15:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.259 10:15:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.259 10:15:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.259 10:15:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.259 10:15:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.259 10:15:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.259 10:15:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.259 10:15:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.259 10:15:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.259 10:15:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.259 10:15:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.259 10:15:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.259 10:15:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.259 10:15:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.259 10:15:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.259 10:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.259 10:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.259 10:15:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.259 10:15:04 -- paths/export.sh@5 -- # export PATH 00:19:51.259 10:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.259 10:15:04 -- nvmf/common.sh@46 -- # : 0 00:19:51.259 10:15:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:51.259 10:15:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:51.259 10:15:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:51.259 10:15:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.259 10:15:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.259 10:15:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:51.259 10:15:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:51.259 10:15:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:51.259 10:15:04 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:51.259 10:15:04 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:51.259 10:15:04 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:51.259 10:15:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:51.259 10:15:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.259 10:15:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:51.259 10:15:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:51.259 10:15:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:51.259 10:15:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.259 10:15:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.259 10:15:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.259 10:15:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:51.259 10:15:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:51.259 10:15:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:51.259 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.524 10:15:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:56.524 10:15:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:56.524 10:15:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:56.524 10:15:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:56.524 10:15:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:56.524 10:15:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:56.524 10:15:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:56.524 10:15:09 -- nvmf/common.sh@294 -- # net_devs=() 00:19:56.524 10:15:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:56.524 10:15:09 -- 
nvmf/common.sh@295 -- # e810=() 00:19:56.524 10:15:09 -- nvmf/common.sh@295 -- # local -ga e810 00:19:56.524 10:15:09 -- nvmf/common.sh@296 -- # x722=() 00:19:56.524 10:15:09 -- nvmf/common.sh@296 -- # local -ga x722 00:19:56.524 10:15:09 -- nvmf/common.sh@297 -- # mlx=() 00:19:56.524 10:15:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:56.524 10:15:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.524 10:15:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:56.524 10:15:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:56.524 10:15:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:56.524 10:15:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:56.524 10:15:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:56.524 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:56.524 10:15:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:56.524 10:15:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:56.524 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:56.524 10:15:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:56.524 10:15:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:56.524 10:15:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.524 10:15:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:56.524 10:15:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.524 10:15:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:19:56.524 Found net devices under 0000:86:00.0: cvl_0_0 00:19:56.524 10:15:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.524 10:15:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:56.524 10:15:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.524 10:15:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:56.524 10:15:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.524 10:15:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:56.524 Found net devices under 0000:86:00.1: cvl_0_1 00:19:56.524 10:15:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.524 10:15:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:56.524 10:15:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:56.524 10:15:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:56.524 10:15:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:56.524 10:15:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.524 10:15:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.524 10:15:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.524 10:15:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:56.524 10:15:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.524 10:15:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.524 10:15:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:56.524 10:15:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.524 10:15:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.524 10:15:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:56.524 10:15:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:56.524 10:15:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:56.524 10:15:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:56.524 10:15:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:56.524 10:15:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:56.524 10:15:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:56.524 10:15:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.524 10:15:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.524 10:15:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.524 10:15:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:56.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:19:56.524 00:19:56.524 --- 10.0.0.2 ping statistics --- 00:19:56.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.525 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:19:56.525 10:15:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:19:56.525 00:19:56.525 --- 10.0.0.1 ping statistics --- 00:19:56.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.525 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:19:56.525 10:15:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.525 10:15:09 -- nvmf/common.sh@410 -- # return 0 00:19:56.525 10:15:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.525 10:15:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.525 10:15:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.525 10:15:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.525 10:15:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.525 10:15:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.525 10:15:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.525 10:15:09 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:56.525 10:15:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.525 10:15:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:56.525 10:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:56.525 10:15:09 -- nvmf/common.sh@469 -- # nvmfpid=304181 00:19:56.525 10:15:09 -- nvmf/common.sh@470 -- # waitforlisten 304181 00:19:56.525 10:15:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:56.525 10:15:09 -- common/autotest_common.sh@819 -- # '[' -z 304181 ']' 00:19:56.525 10:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.525 10:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:56.525 10:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.525 10:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:56.525 10:15:09 -- common/autotest_common.sh@10 -- # set +x 00:19:56.525 [2024-04-24 10:15:09.469405] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:19:56.525 [2024-04-24 10:15:09.469451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.525 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.525 [2024-04-24 10:15:09.527933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.525 [2024-04-24 10:15:09.607515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:56.525 [2024-04-24 10:15:09.607628] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.525 [2024-04-24 10:15:09.607635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.525 [2024-04-24 10:15:09.607642] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
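[Note] The two pings that close the init sequence above validate a topology that nvmf_tcp_init builds in a handful of commands: one port of the NIC pair is moved into a network namespace so that initiator and target traffic really crosses the wire. Schematically (interface names cvl_0_0/cvl_0_1 are the ones discovered on this rig; this paraphrases nvmf/common.sh, it is not its literal code):
  ip netns add cvl_0_0_ns_spdk                    # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target port
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator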
00:19:56.525 [2024-04-24 10:15:09.607687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.525 [2024-04-24 10:15:09.607792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.525 [2024-04-24 10:15:09.607875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.525 [2024-04-24 10:15:09.607876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.092 10:15:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:57.092 10:15:10 -- common/autotest_common.sh@852 -- # return 0 00:19:57.092 10:15:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:57.092 10:15:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:57.092 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.092 10:15:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.092 10:15:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:57.092 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.092 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.092 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.092 10:15:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:57.092 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.092 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.351 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.351 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.351 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.351 [2024-04-24 10:15:10.387108] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.351 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.351 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.351 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.351 Malloc0 00:19:57.351 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.351 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.351 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.351 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.351 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.351 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.351 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.351 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.351 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:19:57.351 [2024-04-24 10:15:10.443520] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.351 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=304376 00:19:57.351 10:15:10 
-- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@30 -- # READ_PID=304378 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # config=() 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:57.351 10:15:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:57.351 { 00:19:57.351 "params": { 00:19:57.351 "name": "Nvme$subsystem", 00:19:57.351 "trtype": "$TEST_TRANSPORT", 00:19:57.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.351 "adrfam": "ipv4", 00:19:57.351 "trsvcid": "$NVMF_PORT", 00:19:57.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.351 "hdgst": ${hdgst:-false}, 00:19:57.351 "ddgst": ${ddgst:-false} 00:19:57.351 }, 00:19:57.351 "method": "bdev_nvme_attach_controller" 00:19:57.351 } 00:19:57.351 EOF 00:19:57.351 )") 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=304380 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # config=() 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:57.351 10:15:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:57.351 { 00:19:57.351 "params": { 00:19:57.351 "name": "Nvme$subsystem", 00:19:57.351 "trtype": "$TEST_TRANSPORT", 00:19:57.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.351 "adrfam": "ipv4", 00:19:57.351 "trsvcid": "$NVMF_PORT", 00:19:57.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.351 "hdgst": ${hdgst:-false}, 00:19:57.351 "ddgst": ${ddgst:-false} 00:19:57.351 }, 00:19:57.351 "method": "bdev_nvme_attach_controller" 00:19:57.351 } 00:19:57.351 EOF 00:19:57.351 )") 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=304383 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@35 -- # sync 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # cat 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # config=() 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:57.351 10:15:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:57.351 { 00:19:57.351 "params": { 00:19:57.351 "name": "Nvme$subsystem", 00:19:57.351 "trtype": "$TEST_TRANSPORT", 00:19:57.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.351 "adrfam": "ipv4", 00:19:57.351 "trsvcid": "$NVMF_PORT", 00:19:57.351 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.351 "hdgst": ${hdgst:-false}, 00:19:57.351 "ddgst": ${ddgst:-false} 00:19:57.351 }, 00:19:57.351 "method": "bdev_nvme_attach_controller" 00:19:57.351 } 00:19:57.351 EOF 00:19:57.351 )") 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # config=() 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # cat 00:19:57.351 10:15:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:57.351 10:15:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:57.351 { 00:19:57.351 "params": { 00:19:57.351 "name": "Nvme$subsystem", 00:19:57.351 "trtype": "$TEST_TRANSPORT", 00:19:57.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.351 "adrfam": "ipv4", 00:19:57.351 "trsvcid": "$NVMF_PORT", 00:19:57.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.351 "hdgst": ${hdgst:-false}, 00:19:57.351 "ddgst": ${ddgst:-false} 00:19:57.351 }, 00:19:57.351 "method": "bdev_nvme_attach_controller" 00:19:57.351 } 00:19:57.351 EOF 00:19:57.351 )") 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # cat 00:19:57.351 10:15:10 -- target/bdev_io_wait.sh@37 -- # wait 304376 00:19:57.351 10:15:10 -- nvmf/common.sh@544 -- # jq . 00:19:57.351 10:15:10 -- nvmf/common.sh@542 -- # cat 00:19:57.351 10:15:10 -- nvmf/common.sh@544 -- # jq . 00:19:57.351 10:15:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:57.351 10:15:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:57.351 "params": { 00:19:57.351 "name": "Nvme1", 00:19:57.351 "trtype": "tcp", 00:19:57.351 "traddr": "10.0.0.2", 00:19:57.351 "adrfam": "ipv4", 00:19:57.351 "trsvcid": "4420", 00:19:57.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.351 "hdgst": false, 00:19:57.351 "ddgst": false 00:19:57.351 }, 00:19:57.351 "method": "bdev_nvme_attach_controller" 00:19:57.352 }' 00:19:57.352 10:15:10 -- nvmf/common.sh@544 -- # jq . 00:19:57.352 10:15:10 -- nvmf/common.sh@544 -- # jq . 
00:19:57.352 10:15:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:57.352 10:15:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:57.352 "params": { 00:19:57.352 "name": "Nvme1", 00:19:57.352 "trtype": "tcp", 00:19:57.352 "traddr": "10.0.0.2", 00:19:57.352 "adrfam": "ipv4", 00:19:57.352 "trsvcid": "4420", 00:19:57.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.352 "hdgst": false, 00:19:57.352 "ddgst": false 00:19:57.352 }, 00:19:57.352 "method": "bdev_nvme_attach_controller" 00:19:57.352 }' 00:19:57.352 10:15:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:57.352 10:15:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:57.352 "params": { 00:19:57.352 "name": "Nvme1", 00:19:57.352 "trtype": "tcp", 00:19:57.352 "traddr": "10.0.0.2", 00:19:57.352 "adrfam": "ipv4", 00:19:57.352 "trsvcid": "4420", 00:19:57.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.352 "hdgst": false, 00:19:57.352 "ddgst": false 00:19:57.352 }, 00:19:57.352 "method": "bdev_nvme_attach_controller" 00:19:57.352 }' 00:19:57.352 10:15:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:57.352 10:15:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:57.352 "params": { 00:19:57.352 "name": "Nvme1", 00:19:57.352 "trtype": "tcp", 00:19:57.352 "traddr": "10.0.0.2", 00:19:57.352 "adrfam": "ipv4", 00:19:57.352 "trsvcid": "4420", 00:19:57.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.352 "hdgst": false, 00:19:57.352 "ddgst": false 00:19:57.352 }, 00:19:57.352 "method": "bdev_nvme_attach_controller" 00:19:57.352 }' 00:19:57.352 [2024-04-24 10:15:10.490568] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... [2024-04-24 10:15:10.490568] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... [2024-04-24 10:15:10.490619] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:57.352 [2024-04-24 10:15:10.490620] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:57.352 [2024-04-24 10:15:10.490997] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... [2024-04-24 10:15:10.491033] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:57.352 [2024-04-24 10:15:10.492503] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:19:57.352 [2024-04-24 10:15:10.492546] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:57.352 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.609 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.609 [2024-04-24 10:15:10.678580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.609 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.609 [2024-04-24 10:15:10.751725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:57.609 [2024-04-24 10:15:10.771178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.609 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.609 [2024-04-24 10:15:10.847151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:57.609 [2024-04-24 10:15:10.863519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.867 [2024-04-24 10:15:10.923438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.867 [2024-04-24 10:15:10.949478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:57.867 [2024-04-24 10:15:10.999004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:57.867 Running I/O for 1 seconds... 00:19:57.867 Running I/O for 1 seconds... 00:19:58.128 Running I/O for 1 seconds... 00:19:58.128 Running I/O for 1 seconds... 00:19:59.117 00:19:59.117 Latency(us) 00:19:59.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.117 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:59.117 Nvme1n1 : 1.01 14030.43 54.81 0.00 0.00 9096.94 5043.42 17552.25 00:19:59.117 =================================================================================================================== 00:19:59.117 Total : 14030.43 54.81 0.00 0.00 9096.94 5043.42 17552.25 00:19:59.117 00:19:59.117 Latency(us) 00:19:59.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.117 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:59.117 Nvme1n1 : 1.00 251356.18 981.86 0.00 0.00 506.66 204.80 619.74 00:19:59.117 =================================================================================================================== 00:19:59.117 Total : 251356.18 981.86 0.00 0.00 506.66 204.80 619.74 00:19:59.117 00:19:59.117 Latency(us) 00:19:59.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.117 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:59.117 Nvme1n1 : 1.01 10419.73 40.70 0.00 0.00 12243.82 2564.45 16526.47 00:19:59.117 =================================================================================================================== 00:19:59.117 Total : 10419.73 40.70 0.00 0.00 12243.82 2564.45 16526.47 00:19:59.117 00:19:59.117 Latency(us) 00:19:59.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.117 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:59.117 Nvme1n1 : 1.01 10659.30 41.64 0.00 0.00 11973.82 5442.34 23592.96 00:19:59.117 =================================================================================================================== 00:19:59.117 Total : 10659.30 41.64 0.00 0.00 11973.82 5442.34 23592.96 00:19:59.376 10:15:12 -- target/bdev_io_wait.sh@38 -- # wait 304378 00:19:59.376 
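[Note] Each of the four "Running I/O for 1 seconds" runs above is one bdevperf instance pinned to its own core and fed the same resolved bdev_nvme_attach_controller JSON on /dev/fd/63; a sketch of one instance plus the reap (pids are this run's; the other three differ only in -m/-i and -w):
  build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 &     # WRITE_PID=304376
  wait 304376 304378 304380 304383              # write, read, flush, unmap
The outsized flush figure (~251k IOPS vs ~10-18k for reads and writes) is plausible rather than alarming: a malloc bdev has no volatile cache to flush, so those completions largely measure per-IO overhead.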
10:15:12 -- target/bdev_io_wait.sh@39 -- # wait 304380 00:19:59.376 10:15:12 -- target/bdev_io_wait.sh@40 -- # wait 304383 00:19:59.376 10:15:12 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.376 10:15:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.376 10:15:12 -- common/autotest_common.sh@10 -- # set +x 00:19:59.376 10:15:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.376 10:15:12 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:59.376 10:15:12 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:59.376 10:15:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:59.376 10:15:12 -- nvmf/common.sh@116 -- # sync 00:19:59.376 10:15:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:59.376 10:15:12 -- nvmf/common.sh@119 -- # set +e 00:19:59.376 10:15:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:59.376 10:15:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:59.376 rmmod nvme_tcp 00:19:59.376 rmmod nvme_fabrics 00:19:59.376 rmmod nvme_keyring 00:19:59.376 10:15:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:59.376 10:15:12 -- nvmf/common.sh@123 -- # set -e 00:19:59.376 10:15:12 -- nvmf/common.sh@124 -- # return 0 00:19:59.376 10:15:12 -- nvmf/common.sh@477 -- # '[' -n 304181 ']' 00:19:59.376 10:15:12 -- nvmf/common.sh@478 -- # killprocess 304181 00:19:59.376 10:15:12 -- common/autotest_common.sh@926 -- # '[' -z 304181 ']' 00:19:59.376 10:15:12 -- common/autotest_common.sh@930 -- # kill -0 304181 00:19:59.376 10:15:12 -- common/autotest_common.sh@931 -- # uname 00:19:59.376 10:15:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:59.376 10:15:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 304181 00:19:59.376 10:15:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:59.376 10:15:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:59.376 10:15:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 304181' 00:19:59.376 killing process with pid 304181 00:19:59.376 10:15:12 -- common/autotest_common.sh@945 -- # kill 304181 00:19:59.376 10:15:12 -- common/autotest_common.sh@950 -- # wait 304181 00:19:59.634 10:15:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:59.635 10:15:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:59.635 10:15:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:59.635 10:15:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.635 10:15:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:59.635 10:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.635 10:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.635 10:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.168 10:15:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:02.168 00:20:02.168 real 0m10.894s 00:20:02.168 user 0m19.861s 00:20:02.168 sys 0m5.727s 00:20:02.168 10:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.168 10:15:14 -- common/autotest_common.sh@10 -- # set +x 00:20:02.168 ************************************ 00:20:02.168 END TEST nvmf_bdev_io_wait 00:20:02.168 ************************************ 00:20:02.168 10:15:14 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:02.168 10:15:14 -- common/autotest_common.sh@1077 -- # '[' 3 
-le 1 ']' 00:20:02.168 10:15:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.168 10:15:14 -- common/autotest_common.sh@10 -- # set +x 00:20:02.168 ************************************ 00:20:02.168 START TEST nvmf_queue_depth 00:20:02.168 ************************************ 00:20:02.168 10:15:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:02.168 * Looking for test storage... 00:20:02.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.168 10:15:15 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.168 10:15:15 -- nvmf/common.sh@7 -- # uname -s 00:20:02.168 10:15:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.168 10:15:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.168 10:15:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.168 10:15:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.168 10:15:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.168 10:15:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.168 10:15:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.168 10:15:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.168 10:15:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.169 10:15:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.169 10:15:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.169 10:15:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.169 10:15:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.169 10:15:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.169 10:15:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.169 10:15:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.169 10:15:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.169 10:15:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.169 10:15:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.169 10:15:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.169 10:15:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.169 10:15:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.169 10:15:15 -- paths/export.sh@5 -- # export PATH 00:20:02.169 10:15:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.169 10:15:15 -- nvmf/common.sh@46 -- # : 0 00:20:02.169 10:15:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:02.169 10:15:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:02.169 10:15:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:02.169 10:15:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.169 10:15:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.169 10:15:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:02.169 10:15:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:02.169 10:15:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:02.169 10:15:15 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:02.169 10:15:15 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:02.169 10:15:15 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.169 10:15:15 -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:02.169 10:15:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:02.169 10:15:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.169 10:15:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:02.169 10:15:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:02.169 10:15:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:02.169 10:15:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.169 10:15:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.169 10:15:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.169 10:15:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:02.169 10:15:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:02.169 10:15:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:02.169 10:15:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.439 10:15:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:07.439 10:15:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:07.439 10:15:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:07.439 10:15:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:07.439 10:15:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:07.439 10:15:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:07.439 10:15:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:07.439 10:15:20 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:07.439 10:15:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:07.439 10:15:20 -- nvmf/common.sh@295 -- # e810=() 00:20:07.439 10:15:20 -- nvmf/common.sh@295 -- # local -ga e810 00:20:07.439 10:15:20 -- nvmf/common.sh@296 -- # x722=() 00:20:07.439 10:15:20 -- nvmf/common.sh@296 -- # local -ga x722 00:20:07.439 10:15:20 -- nvmf/common.sh@297 -- # mlx=() 00:20:07.439 10:15:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:07.439 10:15:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.439 10:15:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:07.439 10:15:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:07.439 10:15:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:07.439 10:15:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:07.439 10:15:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.439 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.439 10:15:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:07.439 10:15:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.439 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.439 10:15:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:07.439 10:15:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:07.439 10:15:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.439 10:15:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:07.439 10:15:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
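[Note] The device walk above comes from gather_supported_nvmf_pci_devs, which buckets NIC PCI IDs into driver families and then resolves each device to its kernel netdev via sysfs. A compressed paraphrase, not the literal script (the pci_bus_cache key format is inferred from the xtrace, where intel=0x8086 and mellanox=0x15b3):
  declare -a e810 x722 mlx pci_devs net_devs
  e810+=( ${pci_bus_cache["0x8086:0x1592"]} )   # E810-C
  e810+=( ${pci_bus_cache["0x8086:0x159b"]} )   # E810-XXV: both 0000:86:00.x ports on this rig
  x722+=( ${pci_bus_cache["0x8086:0x37d2"]} )
  mlx+=(  ${pci_bus_cache["0x15b3:0x1017"]} )   # ConnectX-5, one of several IDs probed
  pci_devs=( "${e810[@]}" )                     # e810 pair wins here for the TCP run
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
      net_devs+=( "${pci_net_devs[@]##*/}" )    # -> cvl_0_0, cvl_0_1
  done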
00:20:07.439 10:15:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:07.439 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.439 10:15:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.439 10:15:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:07.439 10:15:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.439 10:15:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:07.439 10:15:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.439 10:15:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.439 Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.439 10:15:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.439 10:15:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:07.439 10:15:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:07.439 10:15:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:07.439 10:15:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:07.439 10:15:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.439 10:15:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.439 10:15:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.439 10:15:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:07.439 10:15:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.439 10:15:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.439 10:15:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:07.439 10:15:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.439 10:15:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.439 10:15:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:07.439 10:15:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:07.439 10:15:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.439 10:15:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.439 10:15:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.439 10:15:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.439 10:15:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:07.439 10:15:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.439 10:15:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.439 10:15:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.439 10:15:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:07.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:20:07.439 00:20:07.440 --- 10.0.0.2 ping statistics --- 00:20:07.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.440 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:07.440 10:15:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:20:07.440 00:20:07.440 --- 10.0.0.1 ping statistics --- 00:20:07.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.440 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:20:07.440 10:15:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.440 10:15:20 -- nvmf/common.sh@410 -- # return 0 00:20:07.440 10:15:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:07.440 10:15:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.440 10:15:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:07.440 10:15:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:07.440 10:15:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.440 10:15:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:07.440 10:15:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:07.440 10:15:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:07.440 10:15:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:07.440 10:15:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:07.440 10:15:20 -- common/autotest_common.sh@10 -- # set +x 00:20:07.440 10:15:20 -- nvmf/common.sh@469 -- # nvmfpid=308176 00:20:07.440 10:15:20 -- nvmf/common.sh@470 -- # waitforlisten 308176 00:20:07.440 10:15:20 -- common/autotest_common.sh@819 -- # '[' -z 308176 ']' 00:20:07.440 10:15:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.440 10:15:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:07.440 10:15:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.440 10:15:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.440 10:15:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:07.440 10:15:20 -- common/autotest_common.sh@10 -- # set +x 00:20:07.440 [2024-04-24 10:15:20.363771] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:07.440 [2024-04-24 10:15:20.363813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.440 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.440 [2024-04-24 10:15:20.420990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.440 [2024-04-24 10:15:20.500656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:07.440 [2024-04-24 10:15:20.500761] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.440 [2024-04-24 10:15:20.500768] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.440 [2024-04-24 10:15:20.500775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
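[Note] The queue-depth case that follows differs from bdev_io_wait in how bdevperf is driven: it starts idle (-z) and is steered over its own RPC socket instead of a --json config. Reduced to its moving parts (socket path, workload, and NQN from this run; rpc_cmd in the log is a thin wrapper over rpc.py):
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 1024 -o 4096 -w verify -t 10 &         # wait for RPC before creating bdevs
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # drives the 1024-deep verify workload for 10 s; ~18.4k IOPS against Malloc0 here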
00:20:07.440 [2024-04-24 10:15:20.500789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.038 10:15:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:08.038 10:15:21 -- common/autotest_common.sh@852 -- # return 0 00:20:08.038 10:15:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:08.038 10:15:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 10:15:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.038 10:15:21 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.038 10:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 [2024-04-24 10:15:21.201095] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.038 10:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.038 10:15:21 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:08.038 10:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 Malloc0 00:20:08.038 10:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.038 10:15:21 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:08.038 10:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 10:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.038 10:15:21 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.038 10:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 10:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.038 10:15:21 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.038 10:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 [2024-04-24 10:15:21.256854] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.038 10:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.038 10:15:21 -- target/queue_depth.sh@30 -- # bdevperf_pid=308422 00:20:08.038 10:15:21 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:08.038 10:15:21 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:08.038 10:15:21 -- target/queue_depth.sh@33 -- # waitforlisten 308422 /var/tmp/bdevperf.sock 00:20:08.038 10:15:21 -- common/autotest_common.sh@819 -- # '[' -z 308422 ']' 00:20:08.038 10:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.038 10:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:08.038 10:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:08.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.038 10:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:08.038 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.038 [2024-04-24 10:15:21.302089] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:08.038 [2024-04-24 10:15:21.302127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308422 ] 00:20:08.296 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.296 [2024-04-24 10:15:21.354788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.296 [2024-04-24 10:15:21.426132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.864 10:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:08.864 10:15:22 -- common/autotest_common.sh@852 -- # return 0 00:20:08.864 10:15:22 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:08.864 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.864 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:20:09.123 NVMe0n1 00:20:09.123 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.123 10:15:22 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.123 Running I/O for 10 seconds... 00:20:19.106 00:20:19.106 Latency(us) 00:20:19.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.106 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:19.106 Verification LBA range: start 0x0 length 0x4000 00:20:19.106 NVMe0n1 : 10.05 18363.65 71.73 0.00 0.00 55601.74 10770.70 41715.09 00:20:19.106 =================================================================================================================== 00:20:19.106 Total : 18363.65 71.73 0.00 0.00 55601.74 10770.70 41715.09 00:20:19.106 0 00:20:19.364 10:15:32 -- target/queue_depth.sh@39 -- # killprocess 308422 00:20:19.364 10:15:32 -- common/autotest_common.sh@926 -- # '[' -z 308422 ']' 00:20:19.364 10:15:32 -- common/autotest_common.sh@930 -- # kill -0 308422 00:20:19.364 10:15:32 -- common/autotest_common.sh@931 -- # uname 00:20:19.364 10:15:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.364 10:15:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 308422 00:20:19.364 10:15:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:19.364 10:15:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:19.364 10:15:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 308422' 00:20:19.364 killing process with pid 308422 00:20:19.364 10:15:32 -- common/autotest_common.sh@945 -- # kill 308422 00:20:19.364 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.364 00:20:19.364 Latency(us) 00:20:19.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.364 =================================================================================================================== 00:20:19.364 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.364 10:15:32 -- 
common/autotest_common.sh@950 -- # wait 308422 00:20:19.364 10:15:32 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:19.364 10:15:32 -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:19.364 10:15:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:19.364 10:15:32 -- nvmf/common.sh@116 -- # sync 00:20:19.622 10:15:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:19.622 10:15:32 -- nvmf/common.sh@119 -- # set +e 00:20:19.622 10:15:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:19.622 10:15:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:19.622 rmmod nvme_tcp 00:20:19.622 rmmod nvme_fabrics 00:20:19.622 rmmod nvme_keyring 00:20:19.622 10:15:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:19.622 10:15:32 -- nvmf/common.sh@123 -- # set -e 00:20:19.622 10:15:32 -- nvmf/common.sh@124 -- # return 0 00:20:19.622 10:15:32 -- nvmf/common.sh@477 -- # '[' -n 308176 ']' 00:20:19.622 10:15:32 -- nvmf/common.sh@478 -- # killprocess 308176 00:20:19.622 10:15:32 -- common/autotest_common.sh@926 -- # '[' -z 308176 ']' 00:20:19.622 10:15:32 -- common/autotest_common.sh@930 -- # kill -0 308176 00:20:19.622 10:15:32 -- common/autotest_common.sh@931 -- # uname 00:20:19.622 10:15:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.622 10:15:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 308176 00:20:19.622 10:15:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:19.622 10:15:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:19.622 10:15:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 308176' 00:20:19.622 killing process with pid 308176 00:20:19.622 10:15:32 -- common/autotest_common.sh@945 -- # kill 308176 00:20:19.622 10:15:32 -- common/autotest_common.sh@950 -- # wait 308176 00:20:19.881 10:15:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:19.881 10:15:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:19.881 10:15:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:19.881 10:15:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.881 10:15:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:19.881 10:15:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.881 10:15:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.881 10:15:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.785 10:15:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:21.785 00:20:21.785 real 0m20.102s 00:20:21.785 user 0m24.569s 00:20:21.785 sys 0m5.706s 00:20:21.785 10:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.785 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:21.785 ************************************ 00:20:21.785 END TEST nvmf_queue_depth 00:20:21.785 ************************************ 00:20:22.043 10:15:35 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:22.043 10:15:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:22.043 10:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:22.043 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:22.043 ************************************ 00:20:22.043 START TEST nvmf_multipath 00:20:22.043 ************************************ 00:20:22.043 10:15:35 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:22.043 * Looking for test storage... 00:20:22.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.043 10:15:35 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.043 10:15:35 -- nvmf/common.sh@7 -- # uname -s 00:20:22.043 10:15:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.043 10:15:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.043 10:15:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.043 10:15:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.043 10:15:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.043 10:15:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.043 10:15:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.043 10:15:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.043 10:15:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.043 10:15:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.043 10:15:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.043 10:15:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.043 10:15:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.043 10:15:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.043 10:15:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.043 10:15:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.043 10:15:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.043 10:15:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.043 10:15:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.043 10:15:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.043 10:15:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.043 10:15:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.043 10:15:35 -- paths/export.sh@5 -- # export PATH 00:20:22.043 10:15:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.043 10:15:35 -- nvmf/common.sh@46 -- # : 0 00:20:22.043 10:15:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:22.043 10:15:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:22.043 10:15:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:22.043 10:15:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.043 10:15:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.043 10:15:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:22.043 10:15:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:22.043 10:15:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:22.043 10:15:35 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:22.043 10:15:35 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:22.043 10:15:35 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:22.043 10:15:35 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.043 10:15:35 -- target/multipath.sh@43 -- # nvmftestinit 00:20:22.043 10:15:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:22.043 10:15:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.043 10:15:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:22.043 10:15:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:22.043 10:15:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:22.043 10:15:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.043 10:15:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.043 10:15:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.044 10:15:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:22.044 10:15:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:22.044 10:15:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:22.044 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:20:27.308 10:15:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:27.308 10:15:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:27.308 10:15:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:27.308 10:15:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:27.308 10:15:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:27.308 10:15:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:27.308 10:15:40 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:20:27.308 10:15:40 -- nvmf/common.sh@294 -- # net_devs=() 00:20:27.308 10:15:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:27.308 10:15:40 -- nvmf/common.sh@295 -- # e810=() 00:20:27.308 10:15:40 -- nvmf/common.sh@295 -- # local -ga e810 00:20:27.308 10:15:40 -- nvmf/common.sh@296 -- # x722=() 00:20:27.308 10:15:40 -- nvmf/common.sh@296 -- # local -ga x722 00:20:27.308 10:15:40 -- nvmf/common.sh@297 -- # mlx=() 00:20:27.308 10:15:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:27.308 10:15:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.308 10:15:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:27.308 10:15:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:27.308 10:15:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:27.308 10:15:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:27.308 10:15:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:27.308 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:27.308 10:15:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:27.308 10:15:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:27.308 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:27.308 10:15:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:27.308 10:15:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:27.308 10:15:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.308 10:15:40 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:20:27.308 10:15:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.308 10:15:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:27.308 Found net devices under 0000:86:00.0: cvl_0_0 00:20:27.308 10:15:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.308 10:15:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:27.308 10:15:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.308 10:15:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:27.308 10:15:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.308 10:15:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:27.308 Found net devices under 0000:86:00.1: cvl_0_1 00:20:27.308 10:15:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.308 10:15:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:27.308 10:15:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:27.308 10:15:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:27.308 10:15:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:27.308 10:15:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.308 10:15:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.308 10:15:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.308 10:15:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:27.308 10:15:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.308 10:15:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.308 10:15:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:27.308 10:15:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.308 10:15:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.308 10:15:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:27.308 10:15:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:27.308 10:15:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.309 10:15:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.309 10:15:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.309 10:15:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.309 10:15:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:27.309 10:15:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.309 10:15:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:27.309 10:15:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.309 10:15:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:27.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:20:27.309 00:20:27.309 --- 10.0.0.2 ping statistics --- 00:20:27.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.309 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:27.309 10:15:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:20:27.309 00:20:27.309 --- 10.0.0.1 ping statistics --- 00:20:27.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.309 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:27.309 10:15:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.309 10:15:40 -- nvmf/common.sh@410 -- # return 0 00:20:27.309 10:15:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:27.309 10:15:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.309 10:15:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:27.309 10:15:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:27.309 10:15:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.309 10:15:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:27.309 10:15:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:27.309 10:15:40 -- target/multipath.sh@45 -- # '[' -z ']' 00:20:27.309 10:15:40 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:20:27.309 only one NIC for nvmf test 00:20:27.309 10:15:40 -- target/multipath.sh@47 -- # nvmftestfini 00:20:27.309 10:15:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:27.309 10:15:40 -- nvmf/common.sh@116 -- # sync 00:20:27.309 10:15:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:27.309 10:15:40 -- nvmf/common.sh@119 -- # set +e 00:20:27.309 10:15:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:27.309 10:15:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:27.309 rmmod nvme_tcp 00:20:27.309 rmmod nvme_fabrics 00:20:27.309 rmmod nvme_keyring 00:20:27.309 10:15:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:27.309 10:15:40 -- nvmf/common.sh@123 -- # set -e 00:20:27.309 10:15:40 -- nvmf/common.sh@124 -- # return 0 00:20:27.309 10:15:40 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:27.309 10:15:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:27.309 10:15:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:27.309 10:15:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:27.309 10:15:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.309 10:15:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:27.309 10:15:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.309 10:15:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.309 10:15:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.213 10:15:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:29.213 10:15:42 -- target/multipath.sh@48 -- # exit 0 00:20:29.213 10:15:42 -- target/multipath.sh@1 -- # nvmftestfini 00:20:29.213 10:15:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:29.213 10:15:42 -- nvmf/common.sh@116 -- # sync 00:20:29.213 10:15:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:29.213 10:15:42 -- nvmf/common.sh@119 -- # set +e 00:20:29.213 10:15:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:29.213 10:15:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:29.213 10:15:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:29.213 10:15:42 -- nvmf/common.sh@123 -- # set -e 00:20:29.213 10:15:42 -- nvmf/common.sh@124 -- # return 0 00:20:29.213 10:15:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:20:29.213 10:15:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:29.213 10:15:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:29.213 10:15:42 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:20:29.213 10:15:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.213 10:15:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:29.213 10:15:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.213 10:15:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.213 10:15:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.472 10:15:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:29.472 00:20:29.472 real 0m7.410s 00:20:29.472 user 0m1.436s 00:20:29.472 sys 0m3.913s 00:20:29.472 10:15:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.472 10:15:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.472 ************************************ 00:20:29.472 END TEST nvmf_multipath 00:20:29.472 ************************************ 00:20:29.472 10:15:42 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:29.472 10:15:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:29.472 10:15:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:29.472 10:15:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.472 ************************************ 00:20:29.472 START TEST nvmf_zcopy 00:20:29.472 ************************************ 00:20:29.472 10:15:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:29.472 * Looking for test storage... 00:20:29.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:29.472 10:15:42 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.472 10:15:42 -- nvmf/common.sh@7 -- # uname -s 00:20:29.472 10:15:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.472 10:15:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.472 10:15:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.472 10:15:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.472 10:15:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.472 10:15:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.472 10:15:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.472 10:15:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.472 10:15:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.472 10:15:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.472 10:15:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.472 10:15:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.472 10:15:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.472 10:15:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.472 10:15:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.472 10:15:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.472 10:15:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.472 10:15:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.472 10:15:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.472 10:15:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.472 10:15:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.472 10:15:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.472 10:15:42 -- paths/export.sh@5 -- # export PATH 00:20:29.472 10:15:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.472 10:15:42 -- nvmf/common.sh@46 -- # : 0 00:20:29.472 10:15:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:29.472 10:15:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:29.472 10:15:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:29.472 10:15:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.472 10:15:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.472 10:15:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:29.472 10:15:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:29.472 10:15:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:29.472 10:15:42 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:29.472 10:15:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:29.472 10:15:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.472 10:15:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:29.472 10:15:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:29.472 10:15:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:29.472 10:15:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.472 10:15:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.472 10:15:42 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.472 10:15:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:29.472 10:15:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:29.472 10:15:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:29.472 10:15:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.742 10:15:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.742 10:15:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:34.742 10:15:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:34.742 10:15:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:34.742 10:15:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:34.742 10:15:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:34.742 10:15:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:34.742 10:15:48 -- nvmf/common.sh@294 -- # net_devs=() 00:20:34.742 10:15:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:34.742 10:15:48 -- nvmf/common.sh@295 -- # e810=() 00:20:34.742 10:15:48 -- nvmf/common.sh@295 -- # local -ga e810 00:20:34.742 10:15:48 -- nvmf/common.sh@296 -- # x722=() 00:20:34.742 10:15:48 -- nvmf/common.sh@296 -- # local -ga x722 00:20:34.742 10:15:48 -- nvmf/common.sh@297 -- # mlx=() 00:20:34.742 10:15:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:34.742 10:15:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.742 10:15:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:34.742 10:15:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:34.742 10:15:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:34.742 10:15:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:34.742 10:15:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.742 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.742 10:15:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:34.742 10:15:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.742 
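What follows is the nvmf_tcp_init bring-up again: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so in this phy setup NVMe/TCP traffic actually traverses the link between the two ports. Condensed into a standalone sketch (interface names as discovered in this log; they differ per machine):

NS=cvl_0_0_ns_spdk
sudo ip -4 addr flush cvl_0_0; sudo ip -4 addr flush cvl_0_1
sudo ip netns add "$NS"
sudo ip link set cvl_0_0 netns "$NS"                            # target side of the link
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
sudo ip link set cvl_0_1 up
sudo ip netns exec "$NS" ip link set cvl_0_0 up
sudo ip netns exec "$NS" ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                              # initiator -> target sanity check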
10:15:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:34.742 10:15:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:34.742 10:15:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.742 10:15:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:34.742 10:15:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.742 10:15:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.742 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.742 10:15:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.742 10:15:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:34.742 10:15:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.742 10:15:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:34.742 10:15:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.742 10:15:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.742 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.742 10:15:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.742 10:15:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:34.742 10:15:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:34.742 10:15:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:34.742 10:15:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:34.742 10:15:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.742 10:15:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.742 10:15:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.742 10:15:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:34.742 10:15:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.742 10:15:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.742 10:15:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:34.742 10:15:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.742 10:15:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.742 10:15:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:35.002 10:15:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:35.002 10:15:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.002 10:15:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.002 10:15:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.002 10:15:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.002 10:15:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:35.002 10:15:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.002 10:15:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.002 10:15:48 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.002 10:15:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:35.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:20:35.002 00:20:35.002 --- 10.0.0.2 ping statistics --- 00:20:35.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.002 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:20:35.002 10:15:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:20:35.002 00:20:35.002 --- 10.0.0.1 ping statistics --- 00:20:35.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.002 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:35.002 10:15:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.002 10:15:48 -- nvmf/common.sh@410 -- # return 0 00:20:35.002 10:15:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:35.002 10:15:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.002 10:15:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:35.002 10:15:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:35.002 10:15:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.002 10:15:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:35.002 10:15:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:35.262 10:15:48 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:35.262 10:15:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:35.262 10:15:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:35.262 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:20:35.262 10:15:48 -- nvmf/common.sh@469 -- # nvmfpid=317132 00:20:35.262 10:15:48 -- nvmf/common.sh@470 -- # waitforlisten 317132 00:20:35.262 10:15:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:35.262 10:15:48 -- common/autotest_common.sh@819 -- # '[' -z 317132 ']' 00:20:35.262 10:15:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.262 10:15:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.262 10:15:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.262 10:15:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.262 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:20:35.262 [2024-04-24 10:15:48.339454] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:20:35.262 [2024-04-24 10:15:48.339494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.262 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.262 [2024-04-24 10:15:48.396224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.262 [2024-04-24 10:15:48.468000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.262 [2024-04-24 10:15:48.468112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.262 [2024-04-24 10:15:48.468120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.262 [2024-04-24 10:15:48.468127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.262 [2024-04-24 10:15:48.468143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.199 10:15:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.199 10:15:49 -- common/autotest_common.sh@852 -- # return 0 00:20:36.199 10:15:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.199 10:15:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 10:15:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.199 10:15:49 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:36.199 10:15:49 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:36.199 10:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 [2024-04-24 10:15:49.171834] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.199 10:15:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.199 10:15:49 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:36.199 10:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 10:15:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.199 10:15:49 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.199 10:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 [2024-04-24 10:15:49.188015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.199 10:15:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.199 10:15:49 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:36.199 10:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 10:15:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.199 10:15:49 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:36.199 10:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 malloc0 00:20:36.199 10:15:49 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:20:36.199 10:15:49 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.199 10:15:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.199 10:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.199 10:15:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.199 10:15:49 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:36.199 10:15:49 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:36.199 10:15:49 -- nvmf/common.sh@520 -- # config=() 00:20:36.199 10:15:49 -- nvmf/common.sh@520 -- # local subsystem config 00:20:36.199 10:15:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:36.199 10:15:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:36.199 { 00:20:36.199 "params": { 00:20:36.199 "name": "Nvme$subsystem", 00:20:36.199 "trtype": "$TEST_TRANSPORT", 00:20:36.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.199 "adrfam": "ipv4", 00:20:36.199 "trsvcid": "$NVMF_PORT", 00:20:36.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.199 "hdgst": ${hdgst:-false}, 00:20:36.199 "ddgst": ${ddgst:-false} 00:20:36.199 }, 00:20:36.199 "method": "bdev_nvme_attach_controller" 00:20:36.199 } 00:20:36.199 EOF 00:20:36.199 )") 00:20:36.199 10:15:49 -- nvmf/common.sh@542 -- # cat 00:20:36.199 10:15:49 -- nvmf/common.sh@544 -- # jq . 00:20:36.199 10:15:49 -- nvmf/common.sh@545 -- # IFS=, 00:20:36.199 10:15:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:36.199 "params": { 00:20:36.199 "name": "Nvme1", 00:20:36.199 "trtype": "tcp", 00:20:36.199 "traddr": "10.0.0.2", 00:20:36.199 "adrfam": "ipv4", 00:20:36.199 "trsvcid": "4420", 00:20:36.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.199 "hdgst": false, 00:20:36.199 "ddgst": false 00:20:36.199 }, 00:20:36.199 "method": "bdev_nvme_attach_controller" 00:20:36.199 }' 00:20:36.199 [2024-04-24 10:15:49.262170] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:36.199 [2024-04-24 10:15:49.262229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317383 ] 00:20:36.199 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.199 [2024-04-24 10:15:49.315850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.199 [2024-04-24 10:15:49.391676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.458 Running I/O for 10 seconds... 
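The verify run just launched gets its entire bdev stack from the JSON printed above, handed to bdevperf over a file descriptor (--json /dev/fd/62) so nothing is written to disk. A standalone sketch of the same pattern, assuming the standard SPDK app-config envelope around the attach_controller fragment shown in the trace (addresses and NQNs taken from this run):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Process substitution stands in for the harness's /dev/fd trick.
"$SPDK_ROOT/build/examples/bdevperf" -t 10 -q 128 -w verify -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)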
00:20:46.438 00:20:46.438 Latency(us) 00:20:46.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.438 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:46.438 Verification LBA range: start 0x0 length 0x1000 00:20:46.438 Nvme1n1 : 10.01 13103.57 102.37 0.00 0.00 9743.05 1132.63 19033.93 00:20:46.438 =================================================================================================================== 00:20:46.438 Total : 13103.57 102.37 0.00 0.00 9743.05 1132.63 19033.93 00:20:46.697 10:15:59 -- target/zcopy.sh@39 -- # perfpid=319097 00:20:46.697 10:15:59 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:46.697 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:46.697 10:15:59 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:46.697 10:15:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:46.697 10:15:59 -- nvmf/common.sh@520 -- # config=() 00:20:46.698 10:15:59 -- nvmf/common.sh@520 -- # local subsystem config 00:20:46.698 10:15:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:46.698 10:15:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:46.698 { 00:20:46.698 "params": { 00:20:46.698 "name": "Nvme$subsystem", 00:20:46.698 "trtype": "$TEST_TRANSPORT", 00:20:46.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.698 "adrfam": "ipv4", 00:20:46.698 "trsvcid": "$NVMF_PORT", 00:20:46.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.698 "hdgst": ${hdgst:-false}, 00:20:46.698 "ddgst": ${ddgst:-false} 00:20:46.698 }, 00:20:46.698 "method": "bdev_nvme_attach_controller" 00:20:46.698 } 00:20:46.698 EOF 00:20:46.698 )") 00:20:46.698 [2024-04-24 10:15:59.830875] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.830909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 10:15:59 -- nvmf/common.sh@542 -- # cat 00:20:46.698 10:15:59 -- nvmf/common.sh@544 -- # jq . 
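The wall of "Requested NSID 1 already in use" messages below is expected output, not a failure: while the second bdevperf run (randrw, 5 s) is in flight, the test keeps re-adding the namespace that already exists. Each attempt is rejected, but judging from the nvmf_rpc_ns_paused lines, every call also pauses and resumes the subsystem with I/O outstanding, which appears to be the point of the exercise for the zero-copy path. Roughly (a sketch inferred from the trace, not the script verbatim; /tmp/bdevperf.json stands in for the fd-fed config shown earlier):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/build/examples/bdevperf" --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
# Hammer the target with add_ns for a namespace that already exists: the RPC
# fails by design, but each call pauses/resumes the subsystem under load.
while kill -0 "$perfpid" 2>/dev/null; do
  "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"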
00:20:46.698 [2024-04-24 10:15:59.838863] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.838874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 10:15:59 -- nvmf/common.sh@545 -- # IFS=, 00:20:46.698 10:15:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:46.698 "params": { 00:20:46.698 "name": "Nvme1", 00:20:46.698 "trtype": "tcp", 00:20:46.698 "traddr": "10.0.0.2", 00:20:46.698 "adrfam": "ipv4", 00:20:46.698 "trsvcid": "4420", 00:20:46.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.698 "hdgst": false, 00:20:46.698 "ddgst": false 00:20:46.698 }, 00:20:46.698 "method": "bdev_nvme_attach_controller" 00:20:46.698 }' 00:20:46.698 [2024-04-24 10:15:59.846880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.846891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.854903] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.854913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.862923] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.862933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.867740] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:46.698 [2024-04-24 10:15:59.867780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319097 ] 00:20:46.698 [2024-04-24 10:15:59.870945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.870957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.878966] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.878976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.886987] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.886996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.698 [2024-04-24 10:15:59.895007] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.895017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.903028] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.903037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.911050] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.911059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.919076] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.919085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.920965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.698 [2024-04-24 10:15:59.927098] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.927108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.935117] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.935128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.943134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.943143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.951154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.951164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.959177] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.959195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.967198] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.967213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.698 [2024-04-24 10:15:59.975220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.698 [2024-04-24 10:15:59.975231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:15:59.983241] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.958 [2024-04-24 10:15:59.983250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:15:59.991262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.958 [2024-04-24 10:15:59.991271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:15:59.995858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.958 [2024-04-24 10:15:59.999285] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.958 [2024-04-24 10:15:59.999296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:16:00.007382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.958 [2024-04-24 10:16:00.007419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:16:00.015352] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.958 [2024-04-24 10:16:00.015375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:16:00.023361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:46.958 [2024-04-24 10:16:00.023373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:46.958 [2024-04-24 10:16:00.031379] 
[the error pair continues uninterrupted, 10:16:00.039425 through 10:16:00.143697]
00:20:46.958 Running I/O for 5 seconds...
[the error pair resumes at 10:16:00.151708 while the timed run proceeds]
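"Running I/O for 5 seconds..." is bdevperf — the binary named in the DPDK EAL parameters earlier — starting its timed run. A stand-alone invocation with the same duration might look like the sketch below; the binary path varies by build, and the config path, queue depth, I/O size, and workload here are illustrative placeholders, not values read from this log:

    # sketch: run bdevperf for 5 seconds against whatever bdevs a JSON
    # config defines; -q (queue depth), -o (I/O size in bytes) and
    # -w (workload) are placeholder values
    ./build/examples/bdevperf -c bdevperf.json -q 128 -o 4096 -w randread -t 5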
[from here to the end of this capture the log contains nothing but the same two-line error pair, repeating every 8-10 ms through 10:16:02.466636 (elapsed 00:20:47.218 to 00:20:49.304), apart from one ~48 ms quiet gap between 10:16:02.184540 and 10:16:02.232559]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.474882] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.474900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.484211] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.484230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.492847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.492866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.502200] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.502219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.510806] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.510825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.519126] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.519144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.527911] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.527931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.537025] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.537044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.546053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.546077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.555047] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.555066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.563607] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.563626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.572591] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.572609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.304 [2024-04-24 10:16:02.581397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.304 [2024-04-24 10:16:02.581416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.590240] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.590258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.598790] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.598808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.607656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.607679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.616546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.616564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.625955] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.625973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.634176] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.634194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.643272] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.643290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.651936] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.651955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.660411] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.660429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.668672] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.668690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.677016] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.677035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.686008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.686027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.694357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.694375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.702721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.702740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.711196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.711213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.719834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.719852] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.728673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.728691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.737650] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.737669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.746749] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.746769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.755849] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.755867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.764197] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.764215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.772996] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.773018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.781705] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.781723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.790056] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.790080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.799179] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.799198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.807538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.807561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.816688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.816706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.825579] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.825598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.834529] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.834548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.564 [2024-04-24 10:16:02.843008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.564 [2024-04-24 10:16:02.843027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.824 [2024-04-24 10:16:02.851383] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.824 [2024-04-24 10:16:02.851402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.824 [2024-04-24 10:16:02.859927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.859945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.868603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.868625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.877607] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.877625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.886128] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.886147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.894649] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.894667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.902792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.902810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.911375] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.911393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.919681] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.919700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.927067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.927089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.936617] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.936639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.945087] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.945106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.953426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.953443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.962299] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.962317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.971046] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.971063] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.979546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.979564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.988016] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.988033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:02.996770] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:02.996789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.005789] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.005807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.014543] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.014560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.023401] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.023418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.032120] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.032138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.041597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.041617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.050020] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.050038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.059042] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.059061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.067731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.067750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.076840] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.076859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.085856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.085875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.094328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.094346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:49.825 [2024-04-24 10:16:03.102829] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:49.825 [2024-04-24 10:16:03.102847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.131 [2024-04-24 10:16:03.111935] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.131 [2024-04-24 10:16:03.111953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.131 [2024-04-24 10:16:03.120381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.131 [2024-04-24 10:16:03.120399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.131 [2024-04-24 10:16:03.128930] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.131 [2024-04-24 10:16:03.128949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.131 [2024-04-24 10:16:03.137464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.131 [2024-04-24 10:16:03.137482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.131 [2024-04-24 10:16:03.145816] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.131 [2024-04-24 10:16:03.145834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.154836] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.154854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.163841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.163859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.172656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.172674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.180829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.180847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.189568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.189585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.198478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.198497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.206740] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.206758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.214869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.214889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.224409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.224429] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.233084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.233103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.241806] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.241825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.249872] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.249890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.258884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.258902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.267690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.267707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.276546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.276564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.285512] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.285529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.293837] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.293855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.302414] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.302432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.311305] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.311323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.320369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.320387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.329250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.329268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.337965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.337982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.346758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.346777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.355661] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.355679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.364539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.364557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.373486] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.373504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.382306] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.382324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.132 [2024-04-24 10:16:03.390951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.132 [2024-04-24 10:16:03.390969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.400583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.400602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.409544] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.409563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.418100] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.418118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.427211] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.427229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.436121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.436139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.445224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.445243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.453971] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.453990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.463464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.463482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.472189] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.472207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.481521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.481539] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.490722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.490740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.499456] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.499475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.508494] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.508513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.517338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.517355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.526285] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.526302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.535141] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.535159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.544188] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.544206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.553606] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.553623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.562049] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.562067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.570343] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.570361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.579087] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.579105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.587842] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.587859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.596884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.596901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.605653] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.605670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.614479] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.614497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.623435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.623454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.632027] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.632045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.641272] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.641290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.649590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.649608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.657790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.657808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.667026] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.667044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.676412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.676430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.684917] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.684934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.693952] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.693970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.428 [2024-04-24 10:16:03.702278] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.428 [2024-04-24 10:16:03.702297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.711788] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.711807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.720507] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.720525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.729408] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.729427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.737925] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.737943] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.746682] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.746701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.755021] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.755039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.764002] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.764027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.772937] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.772955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.781932] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.781950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.790855] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.790873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.799067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.799089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.807829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.807847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.816037] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.816055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.823916] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.823933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.838059] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.838083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.846736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.846755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.855499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.855517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.863891] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.863918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.873370] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.873388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.881889] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.881907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.890744] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.890763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.899344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.899363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.907779] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.907799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.916048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.916068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.924690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.924711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.933421] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.933445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.942516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.942535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.951284] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.951304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.688 [2024-04-24 10:16:03.960172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.688 [2024-04-24 10:16:03.960191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:03.969237] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:03.969256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:03.978053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:03.978079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:03.986978] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:03.986997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:03.995879] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:03.995898] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.004121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.004139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.013227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.013246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.021433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.021451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.030534] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.030555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.038986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.039004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.048099] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.048117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.056932] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.056951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.066002] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.066021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.074739] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.074758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.083556] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.083576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.092635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.092653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.101626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.101649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.110526] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.110545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.118831] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.118848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:50.948 [2024-04-24 10:16:04.127886] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:50.948 [2024-04-24 10:16:04.127905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log condensed: the same subsystem.c:1753 "Requested NSID 1 already in use" / nvmf_rpc.c:1513 "Unable to add namespace" pair repeats once per retry, roughly every 9 ms, from 10:16:04.136 through 10:16:05.128; first occurrence kept above]
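The condensed run above is the zcopy test repeatedly re-issuing an add-namespace RPC while NSID 1 is still attached; every rejected attempt emits exactly one error pair. A minimal sketch of the call being retried follows; the loop shape and the bdev name malloc0 are assumptions (malloc0 only appears verbatim later in this log), while the RPC name and the cnode1 NQN are taken directly from the trace:

  # run from the SPDK checkout used by this job
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # each attempt should fail with "Requested NSID 1 already in use"
  for i in $(seq 1 100); do
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done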
[log condensed: the error pair continues at the same cadence from 10:16:05.136 through 10:16:05.154, after which the zcopy I/O summary prints]
00:20:51.987 Latency(us)
00:20:51.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:51.987 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:20:51.987 Nvme1n1 : 5.00 17175.69 134.19 0.00 0.00 7446.50 2436.23 49237.48
00:20:51.987 ===================================================================================================================
00:20:51.987 Total : 17175.69 134.19 0.00 0.00 7446.50 2436.23 49237.48
[log condensed: the error pair resumes every ~8 ms from 10:16:05.162; the final two occurrences follow verbatim] 00:20:52.247 [2024-04-24 10:16:05.353262]
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:52.247 [2024-04-24 10:16:05.353273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:52.247 [2024-04-24 10:16:05.361280] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:52.247 [2024-04-24 10:16:05.361291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:52.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (319097) - No such process 00:20:52.247 10:16:05 -- target/zcopy.sh@49 -- # wait 319097 00:20:52.247 10:16:05 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:52.247 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.247 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:52.247 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.247 10:16:05 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:52.247 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.247 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:52.247 delay0 00:20:52.247 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.247 10:16:05 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:52.247 10:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.247 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:20:52.247 10:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.247 10:16:05 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:52.248 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.507 [2024-04-24 10:16:05.531205] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:59.075 [2024-04-24 10:16:11.648963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21315f0 is same with the state(5) to be set 00:20:59.075 [2024-04-24 10:16:11.649006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21315f0 is same with the state(5) to be set 00:20:59.075 Initializing NVMe Controllers 00:20:59.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:59.075 Initialization complete. Launching workers. 
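In the teardown just above, zcopy.sh removes the plain namespace and re-adds NSID 1 backed by a delay bdev before launching the abort example whose per-namespace results follow below. A sketch of that layering using the exact arguments from the trace; the flag meanings (average and p99 read/write latency in microseconds) are an assumption about bdev_delay_create and are not stated in this log:

  # ~1 s of injected latency on every I/O path, so in-flight requests stay slow enough to abort
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose the slow bdev as NSID 1 on the same subsystem
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1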
00:20:59.075 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:20:59.075 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 364, failed to submit 43 00:20:59.075 success 187, unsuccess 177, failed 0 00:20:59.075 10:16:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:59.075 10:16:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:59.075 10:16:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:59.075 10:16:11 -- nvmf/common.sh@116 -- # sync 00:20:59.075 10:16:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:59.075 10:16:11 -- nvmf/common.sh@119 -- # set +e 00:20:59.075 10:16:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:59.075 10:16:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:59.075 rmmod nvme_tcp 00:20:59.075 rmmod nvme_fabrics 00:20:59.075 rmmod nvme_keyring 00:20:59.075 10:16:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.075 10:16:11 -- nvmf/common.sh@123 -- # set -e 00:20:59.075 10:16:11 -- nvmf/common.sh@124 -- # return 0 00:20:59.075 10:16:11 -- nvmf/common.sh@477 -- # '[' -n 317132 ']' 00:20:59.075 10:16:11 -- nvmf/common.sh@478 -- # killprocess 317132 00:20:59.075 10:16:11 -- common/autotest_common.sh@926 -- # '[' -z 317132 ']' 00:20:59.075 10:16:11 -- common/autotest_common.sh@930 -- # kill -0 317132 00:20:59.075 10:16:11 -- common/autotest_common.sh@931 -- # uname 00:20:59.075 10:16:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.075 10:16:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 317132 00:20:59.075 10:16:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:59.075 10:16:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:59.075 10:16:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 317132' 00:20:59.075 killing process with pid 317132 00:20:59.075 10:16:11 -- common/autotest_common.sh@945 -- # kill 317132 00:20:59.075 10:16:11 -- common/autotest_common.sh@950 -- # wait 317132 00:20:59.075 10:16:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:59.075 10:16:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:59.075 10:16:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:59.075 10:16:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.075 10:16:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:59.075 10:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.075 10:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.075 10:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.981 10:16:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:00.981 00:21:00.981 real 0m31.508s 00:21:00.981 user 0m42.829s 00:21:00.981 sys 0m10.606s 00:21:00.981 10:16:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.981 10:16:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.981 ************************************ 00:21:00.981 END TEST nvmf_zcopy 00:21:00.981 ************************************ 00:21:00.981 10:16:14 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:00.981 10:16:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:00.981 10:16:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:00.981 10:16:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.981 ************************************ 
00:21:00.981 START TEST nvmf_nmic 00:21:00.981 ************************************ 00:21:00.981 10:16:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:00.981 * Looking for test storage... 00:21:00.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.981 10:16:14 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.981 10:16:14 -- nvmf/common.sh@7 -- # uname -s 00:21:00.981 10:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.981 10:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.981 10:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.981 10:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.981 10:16:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.981 10:16:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.981 10:16:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.981 10:16:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.981 10:16:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.981 10:16:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.981 10:16:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.981 10:16:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.981 10:16:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.981 10:16:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.981 10:16:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.981 10:16:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.981 10:16:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.981 10:16:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.981 10:16:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.982 10:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.982 10:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.982 10:16:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.982 10:16:14 -- paths/export.sh@5 -- # export PATH 00:21:00.982 10:16:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.982 10:16:14 -- nvmf/common.sh@46 -- # : 0 00:21:00.982 10:16:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:00.982 10:16:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:00.982 10:16:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:00.982 10:16:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.982 10:16:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.982 10:16:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:00.982 10:16:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:00.982 10:16:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:00.982 10:16:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.982 10:16:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.982 10:16:14 -- target/nmic.sh@14 -- # nvmftestinit 00:21:00.982 10:16:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:00.982 10:16:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.982 10:16:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:00.982 10:16:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:00.982 10:16:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:00.982 10:16:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.982 10:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.982 10:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.982 10:16:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:00.982 10:16:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:00.982 10:16:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:00.982 10:16:14 -- common/autotest_common.sh@10 -- # set +x 00:21:06.257 10:16:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:06.257 10:16:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:06.257 10:16:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:06.257 10:16:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:06.257 10:16:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:06.257 10:16:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:06.257 10:16:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:06.257 10:16:19 -- nvmf/common.sh@294 -- # net_devs=() 00:21:06.257 10:16:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:06.257 10:16:19 -- nvmf/common.sh@295 -- # 
e810=() 00:21:06.257 10:16:19 -- nvmf/common.sh@295 -- # local -ga e810 00:21:06.257 10:16:19 -- nvmf/common.sh@296 -- # x722=() 00:21:06.257 10:16:19 -- nvmf/common.sh@296 -- # local -ga x722 00:21:06.257 10:16:19 -- nvmf/common.sh@297 -- # mlx=() 00:21:06.257 10:16:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:06.257 10:16:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.257 10:16:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:06.257 10:16:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:06.257 10:16:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:06.257 10:16:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:06.257 10:16:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:06.257 10:16:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:06.257 10:16:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:06.257 10:16:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:06.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:06.258 10:16:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:06.258 10:16:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:06.258 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:06.258 10:16:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:06.258 10:16:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:06.258 10:16:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.258 10:16:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:06.258 10:16:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.258 10:16:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:06.258 Found net 
devices under 0000:86:00.0: cvl_0_0 00:21:06.258 10:16:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.258 10:16:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:06.258 10:16:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.258 10:16:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:06.258 10:16:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.258 10:16:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:06.258 Found net devices under 0000:86:00.1: cvl_0_1 00:21:06.258 10:16:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.258 10:16:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:06.258 10:16:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:06.258 10:16:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:06.258 10:16:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.258 10:16:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.258 10:16:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.258 10:16:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:06.258 10:16:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.258 10:16:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.258 10:16:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:06.258 10:16:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.258 10:16:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.258 10:16:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:06.258 10:16:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:06.258 10:16:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.258 10:16:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.258 10:16:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.258 10:16:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.258 10:16:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:06.258 10:16:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.258 10:16:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.258 10:16:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.258 10:16:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:06.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:21:06.258 00:21:06.258 --- 10.0.0.2 ping statistics --- 00:21:06.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.258 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:21:06.258 10:16:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:21:06.258 00:21:06.258 --- 10.0.0.1 ping statistics --- 00:21:06.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.258 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:21:06.258 10:16:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.258 10:16:19 -- nvmf/common.sh@410 -- # return 0 00:21:06.258 10:16:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:06.258 10:16:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.258 10:16:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:06.258 10:16:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.258 10:16:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:06.258 10:16:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:06.258 10:16:19 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:06.258 10:16:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:06.258 10:16:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:06.258 10:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:06.258 10:16:19 -- nvmf/common.sh@469 -- # nvmfpid=324589 00:21:06.258 10:16:19 -- nvmf/common.sh@470 -- # waitforlisten 324589 00:21:06.258 10:16:19 -- common/autotest_common.sh@819 -- # '[' -z 324589 ']' 00:21:06.258 10:16:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.258 10:16:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.258 10:16:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.258 10:16:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.258 10:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:06.258 10:16:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:06.258 [2024-04-24 10:16:19.454997] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:06.258 [2024-04-24 10:16:19.455043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.258 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.258 [2024-04-24 10:16:19.512759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.517 [2024-04-24 10:16:19.593301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:06.517 [2024-04-24 10:16:19.593403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.517 [2024-04-24 10:16:19.593411] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.517 [2024-04-24 10:16:19.593417] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
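The nvmf target started above runs inside the network namespace this trace assembled a few entries earlier. Consolidated for readability, the topology commands amount to the following; the device names cvl_0_0/cvl_0_1, both addresses, and the iptables rule are copied from the trace, nothing is added:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT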
00:21:06.517 [2024-04-24 10:16:19.593455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.517 [2024-04-24 10:16:19.593470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.517 [2024-04-24 10:16:19.593561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.517 [2024-04-24 10:16:19.593562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.085 10:16:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.085 10:16:20 -- common/autotest_common.sh@852 -- # return 0 00:21:07.085 10:16:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:07.085 10:16:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 10:16:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.085 10:16:20 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 [2024-04-24 10:16:20.300378] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.085 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.085 10:16:20 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 Malloc0 00:21:07.085 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.085 10:16:20 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.085 10:16:20 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.085 10:16:20 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 [2024-04-24 10:16:20.344212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.085 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.085 10:16:20 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:07.085 test case1: single bdev can't be used in multiple subsystems 00:21:07.085 10:16:20 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.085 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.085 10:16:20 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:07.085 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:21:07.085 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.344 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.344 10:16:20 -- target/nmic.sh@28 -- # nmic_status=0 00:21:07.344 10:16:20 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:07.344 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.344 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.344 [2024-04-24 10:16:20.368166] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:07.344 [2024-04-24 10:16:20.368185] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:07.344 [2024-04-24 10:16:20.368193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:07.344 request: 00:21:07.344 { 00:21:07.344 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.344 "namespace": { 00:21:07.344 "bdev_name": "Malloc0" 00:21:07.344 }, 00:21:07.344 "method": "nvmf_subsystem_add_ns", 00:21:07.344 "req_id": 1 00:21:07.344 } 00:21:07.344 Got JSON-RPC error response 00:21:07.344 response: 00:21:07.344 { 00:21:07.344 "code": -32602, 00:21:07.344 "message": "Invalid parameters" 00:21:07.344 } 00:21:07.344 10:16:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:07.344 10:16:20 -- target/nmic.sh@29 -- # nmic_status=1 00:21:07.344 10:16:20 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:07.344 10:16:20 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:07.344 Adding namespace failed - expected result. 00:21:07.344 10:16:20 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:07.344 test case2: host connect to nvmf target in multiple paths 00:21:07.344 10:16:20 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:07.344 10:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.344 10:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:07.344 [2024-04-24 10:16:20.380290] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:07.344 10:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.344 10:16:20 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:08.281 10:16:21 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:09.668 10:16:22 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:09.668 10:16:22 -- common/autotest_common.sh@1177 -- # local i=0 00:21:09.668 10:16:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:09.668 10:16:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:09.668 10:16:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:11.569 10:16:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:11.569 10:16:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:11.569 10:16:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:11.569 10:16:24 -- common/autotest_common.sh@1186 -- # 
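The JSON request/response shown in test case1 above is the raw RPC exchange behind rpc_cmd: the second subsystem cannot take Malloc0 because the first nvmf_subsystem_add_ns claimed the bdev exclusive_write, exactly as the bdev.c error reports. A hand-run reproduction built from the same commands this test traces (only the standalone rpc.py invocation style is an assumption):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first add claims Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected with -32602 Invalid parameters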
nvme_devices=1 00:21:11.569 10:16:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.569 10:16:24 -- common/autotest_common.sh@1187 -- # return 0 00:21:11.569 10:16:24 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:11.569 [global] 00:21:11.569 thread=1 00:21:11.569 invalidate=1 00:21:11.569 rw=write 00:21:11.569 time_based=1 00:21:11.569 runtime=1 00:21:11.569 ioengine=libaio 00:21:11.569 direct=1 00:21:11.569 bs=4096 00:21:11.569 iodepth=1 00:21:11.569 norandommap=0 00:21:11.569 numjobs=1 00:21:11.569 00:21:11.569 verify_dump=1 00:21:11.569 verify_backlog=512 00:21:11.569 verify_state_save=0 00:21:11.569 do_verify=1 00:21:11.569 verify=crc32c-intel 00:21:11.569 [job0] 00:21:11.569 filename=/dev/nvme0n1 00:21:11.569 Could not set queue depth (nvme0n1) 00:21:11.828 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:11.828 fio-3.35 00:21:11.828 Starting 1 thread 00:21:13.205 00:21:13.205 job0: (groupid=0, jobs=1): err= 0: pid=325638: Wed Apr 24 10:16:26 2024 00:21:13.205 read: IOPS=1586, BW=6346KiB/s (6498kB/s)(6352KiB/1001msec) 00:21:13.205 slat (nsec): min=6390, max=26569, avg=7269.16, stdev=742.45 00:21:13.205 clat (usec): min=207, max=542, avg=345.78, stdev=55.08 00:21:13.205 lat (usec): min=214, max=549, avg=353.05, stdev=55.11 00:21:13.205 clat percentiles (usec): 00:21:13.205 | 1.00th=[ 217], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 306], 00:21:13.205 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:21:13.205 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 424], 95.00th=[ 453], 00:21:13.205 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 545], 00:21:13.205 | 99.99th=[ 545] 00:21:13.205 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:13.205 slat (nsec): min=9188, max=40760, avg=10600.94, stdev=1634.64 00:21:13.205 clat (usec): min=162, max=418, avg=200.15, stdev=31.00 00:21:13.205 lat (usec): min=172, max=459, avg=210.75, stdev=31.17 00:21:13.205 clat percentiles (usec): 00:21:13.205 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 176], 00:21:13.205 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 196], 60.00th=[ 204], 00:21:13.205 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 260], 00:21:13.205 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 371], 99.95th=[ 375], 00:21:13.205 | 99.99th=[ 420] 00:21:13.205 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:21:13.205 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:13.205 lat (usec) : 250=54.48%, 500=45.21%, 750=0.30% 00:21:13.205 cpu : usr=2.60%, sys=2.70%, ctx=3636, majf=0, minf=2 00:21:13.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:13.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.205 issued rwts: total=1588,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:13.205 00:21:13.205 Run status group 0 (all jobs): 00:21:13.205 READ: bw=6346KiB/s (6498kB/s), 6346KiB/s-6346KiB/s (6498kB/s-6498kB/s), io=6352KiB (6504kB), run=1001-1001msec 00:21:13.205 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:21:13.205 00:21:13.205 Disk 
stats (read/write): 00:21:13.205 nvme0n1: ios=1586/1596, merge=0/0, ticks=555/317, in_queue=872, util=91.58% 00:21:13.205 10:16:26 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:13.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:13.205 10:16:26 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:13.205 10:16:26 -- common/autotest_common.sh@1198 -- # local i=0 00:21:13.205 10:16:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:13.205 10:16:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:13.205 10:16:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:13.205 10:16:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:13.205 10:16:26 -- common/autotest_common.sh@1210 -- # return 0 00:21:13.205 10:16:26 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:13.205 10:16:26 -- target/nmic.sh@53 -- # nvmftestfini 00:21:13.205 10:16:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:13.205 10:16:26 -- nvmf/common.sh@116 -- # sync 00:21:13.206 10:16:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:13.206 10:16:26 -- nvmf/common.sh@119 -- # set +e 00:21:13.206 10:16:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:13.206 10:16:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:13.206 rmmod nvme_tcp 00:21:13.206 rmmod nvme_fabrics 00:21:13.206 rmmod nvme_keyring 00:21:13.206 10:16:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:13.206 10:16:26 -- nvmf/common.sh@123 -- # set -e 00:21:13.206 10:16:26 -- nvmf/common.sh@124 -- # return 0 00:21:13.206 10:16:26 -- nvmf/common.sh@477 -- # '[' -n 324589 ']' 00:21:13.206 10:16:26 -- nvmf/common.sh@478 -- # killprocess 324589 00:21:13.206 10:16:26 -- common/autotest_common.sh@926 -- # '[' -z 324589 ']' 00:21:13.206 10:16:26 -- common/autotest_common.sh@930 -- # kill -0 324589 00:21:13.206 10:16:26 -- common/autotest_common.sh@931 -- # uname 00:21:13.206 10:16:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:13.206 10:16:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 324589 00:21:13.206 10:16:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:13.206 10:16:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:13.206 10:16:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 324589' 00:21:13.206 killing process with pid 324589 00:21:13.206 10:16:26 -- common/autotest_common.sh@945 -- # kill 324589 00:21:13.206 10:16:26 -- common/autotest_common.sh@950 -- # wait 324589 00:21:13.465 10:16:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:13.465 10:16:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:13.465 10:16:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:13.465 10:16:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.465 10:16:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:13.465 10:16:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.465 10:16:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.465 10:16:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.003 10:16:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:16.003 00:21:16.003 real 0m14.628s 00:21:16.003 user 0m34.790s 00:21:16.003 sys 0m4.707s 00:21:16.003 10:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.003 10:16:28 -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.003 ************************************ 00:21:16.003 END TEST nvmf_nmic 00:21:16.003 ************************************ 00:21:16.003 10:16:28 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:16.003 10:16:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:16.003 10:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:16.003 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:21:16.003 ************************************ 00:21:16.003 START TEST nvmf_fio_target 00:21:16.003 ************************************ 00:21:16.003 10:16:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:16.003 * Looking for test storage... 00:21:16.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.003 10:16:28 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.003 10:16:28 -- nvmf/common.sh@7 -- # uname -s 00:21:16.003 10:16:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.003 10:16:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.003 10:16:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.003 10:16:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.003 10:16:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.003 10:16:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.003 10:16:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.003 10:16:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.004 10:16:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.004 10:16:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.004 10:16:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.004 10:16:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.004 10:16:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.004 10:16:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.004 10:16:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.004 10:16:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.004 10:16:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.004 10:16:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.004 10:16:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.004 10:16:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.004 10:16:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.004 10:16:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.004 10:16:28 -- paths/export.sh@5 -- # export PATH 00:21:16.004 10:16:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.004 10:16:28 -- nvmf/common.sh@46 -- # : 0 00:21:16.004 10:16:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:16.004 10:16:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:16.004 10:16:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:16.004 10:16:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.004 10:16:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.004 10:16:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:16.004 10:16:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:16.004 10:16:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:16.004 10:16:28 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.004 10:16:28 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.004 10:16:28 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.004 10:16:28 -- target/fio.sh@16 -- # nvmftestinit 00:21:16.004 10:16:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:16.004 10:16:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.004 10:16:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:16.004 10:16:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:16.004 10:16:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:16.004 10:16:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.004 10:16:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.004 10:16:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.004 10:16:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:16.004 10:16:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:16.004 10:16:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:16.004 10:16:28 -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.279 10:16:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:21.279 10:16:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:21.279 10:16:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:21.279 10:16:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:21.279 10:16:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:21.279 10:16:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:21.279 10:16:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:21.279 10:16:34 -- nvmf/common.sh@294 -- # net_devs=() 00:21:21.279 10:16:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:21.279 10:16:34 -- nvmf/common.sh@295 -- # e810=() 00:21:21.279 10:16:34 -- nvmf/common.sh@295 -- # local -ga e810 00:21:21.279 10:16:34 -- nvmf/common.sh@296 -- # x722=() 00:21:21.279 10:16:34 -- nvmf/common.sh@296 -- # local -ga x722 00:21:21.279 10:16:34 -- nvmf/common.sh@297 -- # mlx=() 00:21:21.279 10:16:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:21.279 10:16:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.279 10:16:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:21.279 10:16:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:21.279 10:16:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:21.279 10:16:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:21.279 10:16:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.279 10:16:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:21.279 10:16:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.279 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.279 10:16:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
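The device probe above is matching PCI vendor/device IDs through sysfs: 0x8086/0x159b is the Intel E810 part (bound to the "ice" driver), found here at 0000:86:00.0 and 0000:86:00.1. A hand-rolled equivalent of that lookup — not the script's own code — assuming the usual Linux sysfs layout:

  # Print the net interface behind every E810 (8086:159b) port, as the probe does.
  for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x8086" ] || continue
    [ "$(cat "$pci/device" 2>/dev/null)" = "0x159b" ] || continue
    echo "$pci -> $(ls "$pci/net" 2>/dev/null)"   # e.g. ... -> cvl_0_0
  done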
00:21:21.279 10:16:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:21.279 10:16:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:21.279 10:16:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.279 10:16:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:21.279 10:16:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.279 10:16:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:21.279 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.279 10:16:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.279 10:16:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:21.279 10:16:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.279 10:16:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:21.279 10:16:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.279 10:16:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.279 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.279 10:16:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.279 10:16:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:21.279 10:16:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:21.279 10:16:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:21.279 10:16:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:21.279 10:16:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.279 10:16:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.279 10:16:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.279 10:16:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:21.279 10:16:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.279 10:16:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.279 10:16:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:21.279 10:16:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.279 10:16:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.279 10:16:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:21.279 10:16:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:21.279 10:16:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.279 10:16:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.279 10:16:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.280 10:16:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.280 10:16:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:21.280 10:16:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.280 10:16:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.280 10:16:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.280 10:16:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:21.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:21.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:21:21.280 00:21:21.280 --- 10.0.0.2 ping statistics --- 00:21:21.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.280 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:21.280 10:16:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:21:21.280 00:21:21.280 --- 10.0.0.1 ping statistics --- 00:21:21.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.280 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:21.280 10:16:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.280 10:16:34 -- nvmf/common.sh@410 -- # return 0 00:21:21.280 10:16:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:21.280 10:16:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.280 10:16:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:21.280 10:16:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:21.280 10:16:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.280 10:16:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:21.280 10:16:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:21.280 10:16:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:21.280 10:16:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:21.280 10:16:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:21.280 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:21:21.280 10:16:34 -- nvmf/common.sh@469 -- # nvmfpid=329294 00:21:21.280 10:16:34 -- nvmf/common.sh@470 -- # waitforlisten 329294 00:21:21.280 10:16:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.280 10:16:34 -- common/autotest_common.sh@819 -- # '[' -z 329294 ']' 00:21:21.280 10:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.280 10:16:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:21.280 10:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.280 10:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:21.280 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:21:21.280 [2024-04-24 10:16:34.522828] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:21.280 [2024-04-24 10:16:34.522869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.280 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.539 [2024-04-24 10:16:34.584440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.539 [2024-04-24 10:16:34.667188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:21.539 [2024-04-24 10:16:34.667300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.539 [2024-04-24 10:16:34.667310] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
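Condensed from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, while its peer port (cvl_0_1) stays in the root namespace as the initiator — which is why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" above. The plumbing, as run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                   # root ns -> target (0.161 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator (0.189 ms)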
00:21:21.539 [2024-04-24 10:16:34.667318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.539 [2024-04-24 10:16:34.667358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.539 [2024-04-24 10:16:34.667375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.539 [2024-04-24 10:16:34.667461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.539 [2024-04-24 10:16:34.667462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.107 10:16:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:22.107 10:16:35 -- common/autotest_common.sh@852 -- # return 0 00:21:22.107 10:16:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:22.107 10:16:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:22.107 10:16:35 -- common/autotest_common.sh@10 -- # set +x 00:21:22.107 10:16:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.107 10:16:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:22.366 [2024-04-24 10:16:35.506852] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.366 10:16:35 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:22.625 10:16:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:22.625 10:16:35 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:22.885 10:16:35 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:22.885 10:16:35 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:22.885 10:16:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:22.885 10:16:36 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:23.144 10:16:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:23.144 10:16:36 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:23.402 10:16:36 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:23.402 10:16:36 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:23.402 10:16:36 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:23.660 10:16:36 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:23.660 10:16:36 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:23.919 10:16:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:23.919 10:16:37 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:24.178 10:16:37 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:24.178 10:16:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:24.178 10:16:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.436 10:16:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:24.436 10:16:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:24.695 10:16:37 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.695 [2024-04-24 10:16:37.916729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.695 10:16:37 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:24.953 10:16:38 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:25.212 10:16:38 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:26.586 10:16:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:26.586 10:16:39 -- common/autotest_common.sh@1177 -- # local i=0 00:21:26.586 10:16:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:26.586 10:16:39 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:21:26.586 10:16:39 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:21:26.586 10:16:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:28.488 10:16:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:28.488 10:16:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:28.488 10:16:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:28.488 10:16:41 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:21:28.488 10:16:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:28.488 10:16:41 -- common/autotest_common.sh@1187 -- # return 0 00:21:28.488 10:16:41 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:28.488 [global] 00:21:28.488 thread=1 00:21:28.488 invalidate=1 00:21:28.488 rw=write 00:21:28.488 time_based=1 00:21:28.488 runtime=1 00:21:28.488 ioengine=libaio 00:21:28.488 direct=1 00:21:28.488 bs=4096 00:21:28.488 iodepth=1 00:21:28.488 norandommap=0 00:21:28.488 numjobs=1 00:21:28.488 00:21:28.488 verify_dump=1 00:21:28.488 verify_backlog=512 00:21:28.488 verify_state_save=0 00:21:28.488 do_verify=1 00:21:28.488 verify=crc32c-intel 00:21:28.488 [job0] 00:21:28.488 filename=/dev/nvme0n1 00:21:28.488 [job1] 00:21:28.488 filename=/dev/nvme0n2 00:21:28.488 [job2] 00:21:28.488 filename=/dev/nvme0n3 00:21:28.488 [job3] 00:21:28.488 filename=/dev/nvme0n4 00:21:28.488 Could not set queue depth (nvme0n1) 00:21:28.488 Could not set queue depth (nvme0n2) 00:21:28.488 Could not set queue depth (nvme0n3) 00:21:28.488 Could not set queue depth (nvme0n4) 00:21:28.746 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:28.746 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:28.746 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:21:28.746 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:28.746 fio-3.35 00:21:28.746 Starting 4 threads 00:21:30.124 00:21:30.124 job0: (groupid=0, jobs=1): err= 0: pid=330742: Wed Apr 24 10:16:43 2024 00:21:30.124 read: IOPS=1229, BW=4919KiB/s (5037kB/s)(4924KiB/1001msec) 00:21:30.124 slat (nsec): min=7107, max=34332, avg=8043.52, stdev=1329.15 00:21:30.124 clat (usec): min=401, max=799, avg=476.48, stdev=41.22 00:21:30.124 lat (usec): min=409, max=807, avg=484.52, stdev=41.23 00:21:30.124 clat percentiles (usec): 00:21:30.124 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:21:30.124 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 486], 00:21:30.124 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 537], 00:21:30.124 | 99.00th=[ 611], 99.50th=[ 685], 99.90th=[ 783], 99.95th=[ 799], 00:21:30.124 | 99.99th=[ 799] 00:21:30.124 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:21:30.124 slat (nsec): min=10292, max=46032, avg=11605.04, stdev=1916.23 00:21:30.124 clat (usec): min=207, max=453, avg=245.77, stdev=32.52 00:21:30.124 lat (usec): min=218, max=465, avg=257.37, stdev=32.65 00:21:30.124 clat percentiles (usec): 00:21:30.124 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:21:30.124 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:21:30.124 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:21:30.124 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 445], 99.95th=[ 453], 00:21:30.124 | 99.99th=[ 453] 00:21:30.124 bw ( KiB/s): min= 7488, max= 7488, per=37.54%, avg=7488.00, stdev= 0.00, samples=1 00:21:30.124 iops : min= 1872, max= 1872, avg=1872.00, stdev= 0.00, samples=1 00:21:30.124 lat (usec) : 250=37.19%, 500=52.22%, 750=10.52%, 1000=0.07% 00:21:30.124 cpu : usr=1.70%, sys=5.10%, ctx=2767, majf=0, minf=1 00:21:30.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.124 issued rwts: total=1231,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.124 job1: (groupid=0, jobs=1): err= 0: pid=330757: Wed Apr 24 10:16:43 2024 00:21:30.124 read: IOPS=1134, BW=4539KiB/s (4648kB/s)(4544KiB/1001msec) 00:21:30.124 slat (nsec): min=7105, max=30437, avg=8081.98, stdev=1060.17 00:21:30.124 clat (usec): min=380, max=599, avg=474.67, stdev=40.13 00:21:30.124 lat (usec): min=388, max=607, avg=482.75, stdev=40.15 00:21:30.124 clat percentiles (usec): 00:21:30.124 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 441], 00:21:30.124 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 482], 00:21:30.124 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 562], 00:21:30.124 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 603], 99.95th=[ 603], 00:21:30.124 | 99.99th=[ 603] 00:21:30.124 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:21:30.124 slat (nsec): min=10228, max=53832, avg=11843.03, stdev=2076.04 00:21:30.124 clat (usec): min=184, max=2326, avg=277.10, stdev=78.00 00:21:30.124 lat (usec): min=196, max=2340, avg=288.94, stdev=78.13 00:21:30.124 clat percentiles (usec): 00:21:30.124 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:21:30.124 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 
255], 60.00th=[ 281], 00:21:30.124 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 00:21:30.124 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 848], 99.95th=[ 2311], 00:21:30.124 | 99.99th=[ 2311] 00:21:30.124 bw ( KiB/s): min= 6952, max= 6952, per=34.85%, avg=6952.00, stdev= 0.00, samples=1 00:21:30.124 iops : min= 1738, max= 1738, avg=1738.00, stdev= 0.00, samples=1 00:21:30.124 lat (usec) : 250=26.65%, 500=62.95%, 750=10.33%, 1000=0.04% 00:21:30.124 lat (msec) : 4=0.04% 00:21:30.124 cpu : usr=2.60%, sys=4.00%, ctx=2672, majf=0, minf=1 00:21:30.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.124 issued rwts: total=1136,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.124 job2: (groupid=0, jobs=1): err= 0: pid=330775: Wed Apr 24 10:16:43 2024 00:21:30.124 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:21:30.124 slat (nsec): min=7246, max=20619, avg=8364.78, stdev=818.04 00:21:30.124 clat (usec): min=305, max=653, avg=381.48, stdev=48.64 00:21:30.124 lat (usec): min=313, max=663, avg=389.85, stdev=48.67 00:21:30.124 clat percentiles (usec): 00:21:30.124 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 347], 00:21:30.124 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:21:30.124 | 70.00th=[ 392], 80.00th=[ 416], 90.00th=[ 453], 95.00th=[ 498], 00:21:30.124 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 652], 00:21:30.124 | 99.99th=[ 652] 00:21:30.124 write: IOPS=1536, BW=6146KiB/s (6293kB/s)(6152KiB/1001msec); 0 zone resets 00:21:30.124 slat (nsec): min=8463, max=41331, avg=11915.85, stdev=1937.02 00:21:30.124 clat (usec): min=192, max=510, avg=242.50, stdev=44.14 00:21:30.124 lat (usec): min=205, max=550, avg=254.42, stdev=44.29 00:21:30.124 clat percentiles (usec): 00:21:30.124 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:21:30.124 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:21:30.124 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 306], 95.00th=[ 338], 00:21:30.125 | 99.00th=[ 416], 99.50th=[ 453], 99.90th=[ 498], 99.95th=[ 510], 00:21:30.125 | 99.99th=[ 510] 00:21:30.125 bw ( KiB/s): min= 8192, max= 8192, per=41.06%, avg=8192.00, stdev= 0.00, samples=1 00:21:30.125 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:30.125 lat (usec) : 250=37.93%, 500=59.95%, 750=2.11% 00:21:30.125 cpu : usr=2.90%, sys=4.70%, ctx=3077, majf=0, minf=2 00:21:30.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.125 issued rwts: total=1536,1538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.125 job3: (groupid=0, jobs=1): err= 0: pid=330785: Wed Apr 24 10:16:43 2024 00:21:30.125 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:21:30.125 slat (nsec): min=11375, max=28646, avg=13083.50, stdev=3565.18 00:21:30.125 clat (usec): min=40489, max=41157, avg=40967.05, stdev=119.08 00:21:30.125 lat (usec): min=40500, max=41168, avg=40980.14, stdev=119.24 00:21:30.125 clat percentiles (usec): 00:21:30.125 | 1.00th=[40633], 
5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:21:30.125 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:30.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:30.125 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:30.125 | 99.99th=[41157] 00:21:30.125 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:21:30.125 slat (nsec): min=11187, max=58390, avg=13750.75, stdev=3680.81 00:21:30.125 clat (usec): min=190, max=313, avg=227.69, stdev=20.18 00:21:30.125 lat (usec): min=202, max=339, avg=241.44, stdev=20.87 00:21:30.125 clat percentiles (usec): 00:21:30.125 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:21:30.125 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:21:30.125 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 265], 00:21:30.125 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 314], 99.95th=[ 314], 00:21:30.125 | 99.99th=[ 314] 00:21:30.125 bw ( KiB/s): min= 4096, max= 4096, per=20.53%, avg=4096.00, stdev= 0.00, samples=1 00:21:30.125 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:30.125 lat (usec) : 250=82.40%, 500=13.48% 00:21:30.125 lat (msec) : 50=4.12% 00:21:30.125 cpu : usr=0.88%, sys=0.58%, ctx=534, majf=0, minf=1 00:21:30.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.125 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.125 00:21:30.125 Run status group 0 (all jobs): 00:21:30.125 READ: bw=14.9MiB/s (15.7MB/s), 85.7KiB/s-6138KiB/s (87.7kB/s-6285kB/s), io=15.3MiB (16.1MB), run=1001-1027msec 00:21:30.125 WRITE: bw=19.5MiB/s (20.4MB/s), 1994KiB/s-6146KiB/s (2042kB/s-6293kB/s), io=20.0MiB (21.0MB), run=1001-1027msec 00:21:30.125 00:21:30.125 Disk stats (read/write): 00:21:30.125 nvme0n1: ios=1074/1341, merge=0/0, ticks=511/303, in_queue=814, util=87.06% 00:21:30.125 nvme0n2: ios=1046/1214, merge=0/0, ticks=497/315, in_queue=812, util=87.28% 00:21:30.125 nvme0n3: ios=1176/1536, merge=0/0, ticks=1420/357, in_queue=1777, util=98.43% 00:21:30.125 nvme0n4: ios=34/512, merge=0/0, ticks=1165/106, in_queue=1271, util=91.05% 00:21:30.125 10:16:43 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:30.125 [global] 00:21:30.125 thread=1 00:21:30.125 invalidate=1 00:21:30.125 rw=randwrite 00:21:30.125 time_based=1 00:21:30.125 runtime=1 00:21:30.125 ioengine=libaio 00:21:30.125 direct=1 00:21:30.125 bs=4096 00:21:30.125 iodepth=1 00:21:30.125 norandommap=0 00:21:30.125 numjobs=1 00:21:30.125 00:21:30.125 verify_dump=1 00:21:30.125 verify_backlog=512 00:21:30.125 verify_state_save=0 00:21:30.125 do_verify=1 00:21:30.125 verify=crc32c-intel 00:21:30.125 [job0] 00:21:30.125 filename=/dev/nvme0n1 00:21:30.125 [job1] 00:21:30.125 filename=/dev/nvme0n2 00:21:30.125 [job2] 00:21:30.125 filename=/dev/nvme0n3 00:21:30.125 [job3] 00:21:30.125 filename=/dev/nvme0n4 00:21:30.125 Could not set queue depth (nvme0n1) 00:21:30.125 Could not set queue depth (nvme0n2) 00:21:30.125 Could not set queue depth (nvme0n3) 00:21:30.125 Could not set queue depth (nvme0n4) 00:21:30.383 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:30.383 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:30.383 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:30.383 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:30.383 fio-3.35 00:21:30.383 Starting 4 threads 00:21:31.758 00:21:31.758 job0: (groupid=0, jobs=1): err= 0: pid=331198: Wed Apr 24 10:16:44 2024 00:21:31.758 read: IOPS=1413, BW=5654KiB/s (5790kB/s)(5660KiB/1001msec) 00:21:31.758 slat (nsec): min=6988, max=39152, avg=8245.92, stdev=1602.48 00:21:31.758 clat (usec): min=383, max=675, avg=439.53, stdev=31.53 00:21:31.758 lat (usec): min=392, max=683, avg=447.77, stdev=31.68 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[ 396], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 416], 00:21:31.758 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:21:31.758 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 490], 95.00th=[ 502], 00:21:31.758 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 660], 99.95th=[ 676], 00:21:31.758 | 99.99th=[ 676] 00:21:31.758 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:21:31.758 slat (nsec): min=9028, max=65096, avg=11478.23, stdev=2016.88 00:21:31.758 clat (usec): min=182, max=576, avg=221.15, stdev=24.16 00:21:31.758 lat (usec): min=193, max=588, avg=232.62, stdev=24.69 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:21:31.758 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:21:31.758 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:21:31.758 | 99.00th=[ 302], 99.50th=[ 375], 99.90th=[ 498], 99.95th=[ 578], 00:21:31.758 | 99.99th=[ 578] 00:21:31.758 bw ( KiB/s): min= 8192, max= 8192, per=41.19%, avg=8192.00, stdev= 0.00, samples=1 00:21:31.758 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:31.758 lat (usec) : 250=49.85%, 500=47.61%, 750=2.54% 00:21:31.758 cpu : usr=2.50%, sys=4.60%, ctx=2952, majf=0, minf=1 00:21:31.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.758 issued rwts: total=1415,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:31.758 job1: (groupid=0, jobs=1): err= 0: pid=331213: Wed Apr 24 10:16:44 2024 00:21:31.758 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:21:31.758 slat (nsec): min=7879, max=23349, avg=13925.18, stdev=6713.13 00:21:31.758 clat (usec): min=40787, max=42057, avg=41289.74, stdev=482.38 00:21:31.758 lat (usec): min=40797, max=42080, avg=41303.67, stdev=485.75 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:31.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:31.758 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:31.758 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:31.758 | 99.99th=[42206] 00:21:31.758 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:21:31.758 slat (nsec): min=8852, max=47243, avg=10052.29, stdev=1996.18 
00:21:31.758 clat (usec): min=188, max=521, avg=230.52, stdev=29.13 00:21:31.758 lat (usec): min=198, max=532, avg=240.57, stdev=29.78 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:21:31.758 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:21:31.758 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 265], 00:21:31.758 | 99.00th=[ 310], 99.50th=[ 441], 99.90th=[ 523], 99.95th=[ 523], 00:21:31.758 | 99.99th=[ 523] 00:21:31.758 bw ( KiB/s): min= 4096, max= 4096, per=20.60%, avg=4096.00, stdev= 0.00, samples=1 00:21:31.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:31.758 lat (usec) : 250=80.52%, 500=15.17%, 750=0.19% 00:21:31.758 lat (msec) : 50=4.12% 00:21:31.758 cpu : usr=0.00%, sys=0.68%, ctx=535, majf=0, minf=1 00:21:31.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.758 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:31.758 job2: (groupid=0, jobs=1): err= 0: pid=331232: Wed Apr 24 10:16:44 2024 00:21:31.758 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:21:31.758 slat (nsec): min=7515, max=42688, avg=9040.04, stdev=1604.56 00:21:31.758 clat (usec): min=318, max=571, avg=378.57, stdev=47.78 00:21:31.758 lat (usec): min=326, max=580, avg=387.61, stdev=48.03 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:21:31.758 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 367], 00:21:31.758 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 486], 95.00th=[ 502], 00:21:31.758 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 553], 99.95th=[ 570], 00:21:31.758 | 99.99th=[ 570] 00:21:31.758 write: IOPS=1577, BW=6310KiB/s (6461kB/s)(6316KiB/1001msec); 0 zone resets 00:21:31.758 slat (nsec): min=10977, max=38983, avg=12647.46, stdev=1713.28 00:21:31.758 clat (usec): min=183, max=517, avg=237.18, stdev=26.54 00:21:31.758 lat (usec): min=195, max=556, avg=249.83, stdev=26.87 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:21:31.758 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:21:31.758 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 281], 00:21:31.758 | 99.00th=[ 330], 99.50th=[ 367], 99.90th=[ 457], 99.95th=[ 519], 00:21:31.758 | 99.99th=[ 519] 00:21:31.758 bw ( KiB/s): min= 8192, max= 8192, per=41.19%, avg=8192.00, stdev= 0.00, samples=1 00:21:31.758 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:31.758 lat (usec) : 250=40.16%, 500=56.98%, 750=2.86% 00:21:31.758 cpu : usr=2.40%, sys=5.60%, ctx=3116, majf=0, minf=1 00:21:31.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.758 issued rwts: total=1536,1579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:31.758 job3: (groupid=0, jobs=1): err= 0: pid=331238: Wed Apr 24 10:16:44 2024 00:21:31.758 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 
00:21:31.758 slat (nsec): min=7392, max=24530, avg=8752.63, stdev=1552.65 00:21:31.758 clat (usec): min=322, max=42014, avg=599.70, stdev=3146.34 00:21:31.758 lat (usec): min=330, max=42037, avg=608.45, stdev=3147.26 00:21:31.758 clat percentiles (usec): 00:21:31.758 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347], 00:21:31.758 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:21:31.758 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 383], 00:21:31.758 | 99.00th=[ 494], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:21:31.758 | 99.99th=[42206] 00:21:31.759 write: IOPS=1507, BW=6030KiB/s (6175kB/s)(6036KiB/1001msec); 0 zone resets 00:21:31.759 slat (nsec): min=9405, max=37595, avg=11703.87, stdev=1738.06 00:21:31.759 clat (usec): min=179, max=553, avg=233.36, stdev=25.36 00:21:31.759 lat (usec): min=190, max=563, avg=245.06, stdev=25.55 00:21:31.759 clat percentiles (usec): 00:21:31.759 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:21:31.759 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:21:31.759 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273], 00:21:31.759 | 99.00th=[ 322], 99.50th=[ 367], 99.90th=[ 474], 99.95th=[ 553], 00:21:31.759 | 99.99th=[ 553] 00:21:31.759 bw ( KiB/s): min= 4096, max= 4096, per=20.60%, avg=4096.00, stdev= 0.00, samples=1 00:21:31.759 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:31.759 lat (usec) : 250=52.11%, 500=47.57%, 750=0.08% 00:21:31.759 lat (msec) : 50=0.24% 00:21:31.759 cpu : usr=2.30%, sys=3.40%, ctx=2535, majf=0, minf=2 00:21:31.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:31.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.759 issued rwts: total=1024,1509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:31.759 00:21:31.759 Run status group 0 (all jobs): 00:21:31.759 READ: bw=15.1MiB/s (15.8MB/s), 85.2KiB/s-6138KiB/s (87.2kB/s-6285kB/s), io=15.6MiB (16.4MB), run=1001-1033msec 00:21:31.759 WRITE: bw=19.4MiB/s (20.4MB/s), 1983KiB/s-6310KiB/s (2030kB/s-6461kB/s), io=20.1MiB (21.0MB), run=1001-1033msec 00:21:31.759 00:21:31.759 Disk stats (read/write): 00:21:31.759 nvme0n1: ios=1134/1536, merge=0/0, ticks=495/318, in_queue=813, util=87.07% 00:21:31.759 nvme0n2: ios=52/512, merge=0/0, ticks=935/114, in_queue=1049, util=97.97% 00:21:31.759 nvme0n3: ios=1248/1536, merge=0/0, ticks=1300/352, in_queue=1652, util=98.23% 00:21:31.759 nvme0n4: ios=972/1024, merge=0/0, ticks=1523/239, in_queue=1762, util=98.32% 00:21:31.759 10:16:44 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:31.759 [global] 00:21:31.759 thread=1 00:21:31.759 invalidate=1 00:21:31.759 rw=write 00:21:31.759 time_based=1 00:21:31.759 runtime=1 00:21:31.759 ioengine=libaio 00:21:31.759 direct=1 00:21:31.759 bs=4096 00:21:31.759 iodepth=128 00:21:31.759 norandommap=0 00:21:31.759 numjobs=1 00:21:31.759 00:21:31.759 verify_dump=1 00:21:31.759 verify_backlog=512 00:21:31.759 verify_state_save=0 00:21:31.759 do_verify=1 00:21:31.759 verify=crc32c-intel 00:21:31.759 [job0] 00:21:31.759 filename=/dev/nvme0n1 00:21:31.759 [job1] 00:21:31.759 filename=/dev/nvme0n2 00:21:31.759 [job2] 00:21:31.759 filename=/dev/nvme0n3 00:21:31.759 [job3] 00:21:31.759 
filename=/dev/nvme0n4 00:21:31.759 Could not set queue depth (nvme0n1) 00:21:31.759 Could not set queue depth (nvme0n2) 00:21:31.759 Could not set queue depth (nvme0n3) 00:21:31.759 Could not set queue depth (nvme0n4) 00:21:31.759 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:31.759 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:31.759 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:31.759 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:31.759 fio-3.35 00:21:31.759 Starting 4 threads 00:21:33.135 00:21:33.135 job0: (groupid=0, jobs=1): err= 0: pid=331640: Wed Apr 24 10:16:46 2024 00:21:33.135 read: IOPS=1976, BW=7905KiB/s (8095kB/s)(7992KiB/1011msec) 00:21:33.135 slat (nsec): min=1324, max=25110k, avg=217242.17, stdev=1445846.79 00:21:33.135 clat (usec): min=3217, max=92436, avg=25719.55, stdev=12855.45 00:21:33.135 lat (usec): min=8486, max=92443, avg=25936.79, stdev=12953.65 00:21:33.135 clat percentiles (usec): 00:21:33.135 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[13435], 20.00th=[18220], 00:21:33.135 | 30.00th=[19006], 40.00th=[20317], 50.00th=[21365], 60.00th=[25297], 00:21:33.135 | 70.00th=[28181], 80.00th=[32375], 90.00th=[39584], 95.00th=[47973], 00:21:33.135 | 99.00th=[83362], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799], 00:21:33.135 | 99.99th=[92799] 00:21:33.135 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:21:33.135 slat (usec): min=2, max=13343, avg=273.35, stdev=1309.91 00:21:33.135 clat (usec): min=1696, max=120331, avg=37516.63, stdev=34378.30 00:21:33.135 lat (usec): min=1710, max=120341, avg=37789.98, stdev=34619.07 00:21:33.135 clat percentiles (msec): 00:21:33.136 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 16], 00:21:33.136 | 30.00th=[ 20], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 23], 00:21:33.136 | 70.00th=[ 26], 80.00th=[ 79], 90.00th=[ 106], 95.00th=[ 111], 00:21:33.136 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:21:33.136 | 99.99th=[ 121] 00:21:33.136 bw ( KiB/s): min= 7016, max= 9368, per=11.57%, avg=8192.00, stdev=1663.12, samples=2 00:21:33.136 iops : min= 1754, max= 2342, avg=2048.00, stdev=415.78, samples=2 00:21:33.136 lat (msec) : 2=0.12%, 4=0.42%, 10=3.81%, 20=29.26%, 50=52.40% 00:21:33.136 lat (msec) : 100=7.07%, 250=6.92% 00:21:33.136 cpu : usr=1.68%, sys=2.77%, ctx=265, majf=0, minf=1 00:21:33.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:33.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.136 issued rwts: total=1998,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.136 job1: (groupid=0, jobs=1): err= 0: pid=331641: Wed Apr 24 10:16:46 2024 00:21:33.136 read: IOPS=6233, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1006msec) 00:21:33.136 slat (nsec): min=1506, max=13543k, avg=77021.68, stdev=566390.69 00:21:33.136 clat (usec): min=1961, max=25492, avg=10550.42, stdev=3135.61 00:21:33.136 lat (usec): min=5072, max=25502, avg=10627.44, stdev=3153.38 00:21:33.136 clat percentiles (usec): 00:21:33.136 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7504], 20.00th=[ 8094], 00:21:33.136 | 30.00th=[ 8586], 40.00th=[ 9372], 
50.00th=[ 9765], 60.00th=[10683], 00:21:33.136 | 70.00th=[11469], 80.00th=[12387], 90.00th=[14877], 95.00th=[16319], 00:21:33.136 | 99.00th=[21890], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:21:33.136 | 99.99th=[25560] 00:21:33.136 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:21:33.136 slat (usec): min=2, max=9022, avg=72.11, stdev=502.90 00:21:33.136 clat (usec): min=2042, max=22798, avg=9228.13, stdev=3084.06 00:21:33.136 lat (usec): min=2085, max=22806, avg=9300.25, stdev=3090.28 00:21:33.136 clat percentiles (usec): 00:21:33.136 | 1.00th=[ 3261], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 6521], 00:21:33.136 | 30.00th=[ 7373], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:21:33.136 | 70.00th=[10421], 80.00th=[11076], 90.00th=[13698], 95.00th=[14746], 00:21:33.136 | 99.00th=[18220], 99.50th=[20055], 99.90th=[22152], 99.95th=[22676], 00:21:33.136 | 99.99th=[22676] 00:21:33.136 bw ( KiB/s): min=24608, max=28632, per=37.60%, avg=26620.00, stdev=2845.40, samples=2 00:21:33.136 iops : min= 6152, max= 7158, avg=6655.00, stdev=711.35, samples=2 00:21:33.136 lat (msec) : 2=0.01%, 4=1.13%, 10=57.24%, 20=40.64%, 50=0.97% 00:21:33.136 cpu : usr=5.97%, sys=6.67%, ctx=438, majf=0, minf=1 00:21:33.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:33.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.136 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.136 job2: (groupid=0, jobs=1): err= 0: pid=331642: Wed Apr 24 10:16:46 2024 00:21:33.136 read: IOPS=3388, BW=13.2MiB/s (13.9MB/s)(13.4MiB/1011msec) 00:21:33.136 slat (nsec): min=1554, max=31793k, avg=158612.20, stdev=1291036.17 00:21:33.136 clat (usec): min=7081, max=71312, avg=20262.11, stdev=9481.97 00:21:33.136 lat (usec): min=7087, max=71341, avg=20420.72, stdev=9584.79 00:21:33.136 clat percentiles (usec): 00:21:33.136 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[10683], 20.00th=[12387], 00:21:33.136 | 30.00th=[15008], 40.00th=[16712], 50.00th=[18482], 60.00th=[19530], 00:21:33.136 | 70.00th=[22676], 80.00th=[25035], 90.00th=[33817], 95.00th=[40109], 00:21:33.136 | 99.00th=[55837], 99.50th=[57934], 99.90th=[58983], 99.95th=[60031], 00:21:33.136 | 99.99th=[71828] 00:21:33.136 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:21:33.136 slat (usec): min=2, max=19059, avg=120.54, stdev=854.60 00:21:33.136 clat (usec): min=4215, max=60218, avg=16393.56, stdev=6368.16 00:21:33.136 lat (usec): min=4230, max=60221, avg=16514.10, stdev=6406.49 00:21:33.136 clat percentiles (usec): 00:21:33.136 | 1.00th=[ 5473], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10683], 00:21:33.136 | 30.00th=[11469], 40.00th=[13304], 50.00th=[15401], 60.00th=[18220], 00:21:33.136 | 70.00th=[21365], 80.00th=[22414], 90.00th=[23462], 95.00th=[25035], 00:21:33.136 | 99.00th=[31851], 99.50th=[31851], 99.90th=[60031], 99.95th=[60031], 00:21:33.136 | 99.99th=[60031] 00:21:33.136 bw ( KiB/s): min=13056, max=15616, per=20.25%, avg=14336.00, stdev=1810.19, samples=2 00:21:33.136 iops : min= 3264, max= 3904, avg=3584.00, stdev=452.55, samples=2 00:21:33.136 lat (msec) : 10=10.63%, 20=52.72%, 50=35.41%, 100=1.24% 00:21:33.136 cpu : usr=3.76%, sys=4.26%, ctx=259, majf=0, minf=1 00:21:33.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 
00:21:33.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.136 issued rwts: total=3426,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.136 job3: (groupid=0, jobs=1): err= 0: pid=331643: Wed Apr 24 10:16:46 2024 00:21:33.136 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:21:33.136 slat (nsec): min=1057, max=26387k, avg=77995.21, stdev=765735.12 00:21:33.136 clat (usec): min=945, max=42976, avg=12715.57, stdev=6785.71 00:21:33.136 lat (usec): min=952, max=42981, avg=12793.57, stdev=6808.47 00:21:33.136 clat percentiles (usec): 00:21:33.136 | 1.00th=[ 2180], 5.00th=[ 4490], 10.00th=[ 5800], 20.00th=[ 8291], 00:21:33.136 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:21:33.136 | 70.00th=[13173], 80.00th=[15270], 90.00th=[22676], 95.00th=[27657], 00:21:33.136 | 99.00th=[34866], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:21:33.136 | 99.99th=[42730] 00:21:33.136 write: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1006msec); 0 zone resets 00:21:33.136 slat (nsec): min=1822, max=21724k, avg=67045.65, stdev=530594.03 00:21:33.136 clat (usec): min=501, max=32077, avg=11197.59, stdev=4827.85 00:21:33.136 lat (usec): min=513, max=34649, avg=11264.64, stdev=4850.93 00:21:33.136 clat percentiles (usec): 00:21:33.136 | 1.00th=[ 1549], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 8029], 00:21:33.136 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11338], 60.00th=[11863], 00:21:33.136 | 70.00th=[12256], 80.00th=[12387], 90.00th=[16450], 95.00th=[21627], 00:21:33.136 | 99.00th=[26870], 99.50th=[30016], 99.90th=[32113], 99.95th=[32113], 00:21:33.136 | 99.99th=[32113] 00:21:33.136 bw ( KiB/s): min=21304, max=22512, per=30.95%, avg=21908.00, stdev=854.18, samples=2 00:21:33.136 iops : min= 5326, max= 5628, avg=5477.00, stdev=213.55, samples=2 00:21:33.136 lat (usec) : 750=0.07%, 1000=0.19% 00:21:33.136 lat (msec) : 2=0.86%, 4=2.47%, 10=30.55%, 20=56.79%, 50=9.08% 00:21:33.136 cpu : usr=3.48%, sys=4.48%, ctx=549, majf=0, minf=1 00:21:33.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:33.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.136 issued rwts: total=5120,5604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.136 00:21:33.136 Run status group 0 (all jobs): 00:21:33.136 READ: bw=65.0MiB/s (68.1MB/s), 7905KiB/s-24.3MiB/s (8095kB/s-25.5MB/s), io=65.7MiB (68.9MB), run=1006-1011msec 00:21:33.136 WRITE: bw=69.1MiB/s (72.5MB/s), 8103KiB/s-25.8MiB/s (8297kB/s-27.1MB/s), io=69.9MiB (73.3MB), run=1006-1011msec 00:21:33.136 00:21:33.136 Disk stats (read/write): 00:21:33.136 nvme0n1: ios=1475/1536, merge=0/0, ticks=37227/67331, in_queue=104558, util=86.96% 00:21:33.136 nvme0n2: ios=5277/5632, merge=0/0, ticks=54080/49509, in_queue=103589, util=87.21% 00:21:33.136 nvme0n3: ios=2894/3072, merge=0/0, ticks=57851/46577, in_queue=104428, util=88.88% 00:21:33.136 nvme0n4: ios=4276/4608, merge=0/0, ticks=50612/44278, in_queue=94890, util=89.63% 00:21:33.136 10:16:46 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:33.136 [global] 00:21:33.136 thread=1 00:21:33.136 
invalidate=1 00:21:33.136 rw=randwrite 00:21:33.136 time_based=1 00:21:33.136 runtime=1 00:21:33.136 ioengine=libaio 00:21:33.136 direct=1 00:21:33.136 bs=4096 00:21:33.136 iodepth=128 00:21:33.136 norandommap=0 00:21:33.136 numjobs=1 00:21:33.136 00:21:33.136 verify_dump=1 00:21:33.136 verify_backlog=512 00:21:33.136 verify_state_save=0 00:21:33.136 do_verify=1 00:21:33.136 verify=crc32c-intel 00:21:33.136 [job0] 00:21:33.136 filename=/dev/nvme0n1 00:21:33.136 [job1] 00:21:33.136 filename=/dev/nvme0n2 00:21:33.136 [job2] 00:21:33.136 filename=/dev/nvme0n3 00:21:33.136 [job3] 00:21:33.136 filename=/dev/nvme0n4 00:21:33.136 Could not set queue depth (nvme0n1) 00:21:33.136 Could not set queue depth (nvme0n2) 00:21:33.136 Could not set queue depth (nvme0n3) 00:21:33.136 Could not set queue depth (nvme0n4) 00:21:33.428 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:33.428 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:33.428 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:33.428 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:33.428 fio-3.35 00:21:33.428 Starting 4 threads 00:21:34.849 00:21:34.849 job0: (groupid=0, jobs=1): err= 0: pid=332015: Wed Apr 24 10:16:47 2024 00:21:34.849 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:21:34.849 slat (nsec): min=1020, max=17186k, avg=87718.18, stdev=592213.56 00:21:34.849 clat (usec): min=1599, max=33332, avg=11905.91, stdev=4294.16 00:21:34.849 lat (usec): min=1602, max=33338, avg=11993.63, stdev=4317.88 00:21:34.849 clat percentiles (usec): 00:21:34.849 | 1.00th=[ 3294], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 9372], 00:21:34.849 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:21:34.849 | 70.00th=[12125], 80.00th=[13304], 90.00th=[15926], 95.00th=[23200], 00:21:34.849 | 99.00th=[26870], 99.50th=[29230], 99.90th=[29492], 99.95th=[33424], 00:21:34.849 | 99.99th=[33424] 00:21:34.849 write: IOPS=4702, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec); 0 zone resets 00:21:34.849 slat (usec): min=2, max=23410, avg=116.77, stdev=836.81 00:21:34.849 clat (usec): min=2433, max=57666, avg=14927.54, stdev=9916.25 00:21:34.849 lat (usec): min=2465, max=57674, avg=15044.31, stdev=9963.58 00:21:34.849 clat percentiles (usec): 00:21:34.849 | 1.00th=[ 3294], 5.00th=[ 6325], 10.00th=[ 8717], 20.00th=[ 9634], 00:21:34.849 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11338], 60.00th=[12125], 00:21:34.849 | 70.00th=[13698], 80.00th=[18744], 90.00th=[27132], 95.00th=[38011], 00:21:34.849 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:21:34.849 | 99.99th=[57410] 00:21:34.849 bw ( KiB/s): min=16384, max=20512, per=25.95%, avg=18448.00, stdev=2918.94, samples=2 00:21:34.849 iops : min= 4096, max= 5128, avg=4612.00, stdev=729.73, samples=2 00:21:34.849 lat (msec) : 2=0.18%, 4=1.19%, 10=26.27%, 20=60.10%, 50=11.32% 00:21:34.849 lat (msec) : 100=0.93% 00:21:34.849 cpu : usr=1.49%, sys=5.58%, ctx=494, majf=0, minf=1 00:21:34.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:34.849 issued rwts: total=4608,4726,0,0 short=0,0,0,0 dropped=0,0,0,0 
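For readability, the job file that fio-wrapper prints piecemeal above, reassembled into standalone .fio form. Every option and filename below is copied from the trace; only the reassembly is editorial:

[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4

With time_based=1, runtime=1, iodepth=128 and do_verify=1, each job writes randomly for one second at queue depth 128 and then verifies the written data with crc32c-intel, which is why each fio run above completes in roughly a second per group.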
00:21:34.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:34.849 job1: (groupid=0, jobs=1): err= 0: pid=332016: Wed Apr 24 10:16:47 2024 00:21:34.849 read: IOPS=4317, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1008msec) 00:21:34.849 slat (nsec): min=1038, max=18213k, avg=122701.17, stdev=914430.56 00:21:34.849 clat (usec): min=601, max=72348, avg=15067.65, stdev=10524.80 00:21:34.849 lat (usec): min=3796, max=72376, avg=15190.35, stdev=10603.05 00:21:34.849 clat percentiles (usec): 00:21:34.849 | 1.00th=[ 4686], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9765], 00:21:34.849 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12256], 00:21:34.849 | 70.00th=[13042], 80.00th=[17171], 90.00th=[26608], 95.00th=[39584], 00:21:34.849 | 99.00th=[61604], 99.50th=[66323], 99.90th=[66847], 99.95th=[66847], 00:21:34.849 | 99.99th=[71828] 00:21:34.849 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:21:34.849 slat (nsec): min=1764, max=13726k, avg=91008.45, stdev=561374.22 00:21:34.849 clat (usec): min=1641, max=40042, avg=13522.93, stdev=6487.73 00:21:34.849 lat (usec): min=1645, max=40050, avg=13613.94, stdev=6504.21 00:21:34.849 clat percentiles (usec): 00:21:34.849 | 1.00th=[ 3326], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 8979], 00:21:34.849 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11600], 60.00th=[11994], 00:21:34.849 | 70.00th=[13829], 80.00th=[18220], 90.00th=[22676], 95.00th=[28181], 00:21:34.849 | 99.00th=[35390], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:21:34.850 | 99.99th=[40109] 00:21:34.850 bw ( KiB/s): min=16384, max=20480, per=25.93%, avg=18432.00, stdev=2896.31, samples=2 00:21:34.850 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:21:34.850 lat (usec) : 750=0.01% 00:21:34.850 lat (msec) : 2=0.04%, 4=0.65%, 10=24.98%, 20=57.85%, 50=15.06% 00:21:34.850 lat (msec) : 100=1.42% 00:21:34.850 cpu : usr=1.89%, sys=4.97%, ctx=435, majf=0, minf=1 00:21:34.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:34.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:34.850 issued rwts: total=4352,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:34.850 job2: (groupid=0, jobs=1): err= 0: pid=332018: Wed Apr 24 10:16:47 2024 00:21:34.850 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:21:34.850 slat (nsec): min=1210, max=31264k, avg=129141.93, stdev=1008622.12 00:21:34.850 clat (usec): min=1211, max=48997, avg=17682.00, stdev=8599.24 00:21:34.850 lat (usec): min=1235, max=53986, avg=17811.14, stdev=8669.32 00:21:34.850 clat percentiles (usec): 00:21:34.850 | 1.00th=[ 4015], 5.00th=[ 6194], 10.00th=[10028], 20.00th=[11994], 00:21:34.850 | 30.00th=[12780], 40.00th=[13435], 50.00th=[15139], 60.00th=[18220], 00:21:34.850 | 70.00th=[19792], 80.00th=[22938], 90.00th=[29492], 95.00th=[37487], 00:21:34.850 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:21:34.850 | 99.99th=[49021] 00:21:34.850 write: IOPS=3884, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1007msec); 0 zone resets 00:21:34.850 slat (nsec): min=1818, max=12914k, avg=114484.13, stdev=738934.45 00:21:34.850 clat (usec): min=803, max=68150, avg=16490.33, stdev=11273.71 00:21:34.850 lat (usec): min=847, max=68155, avg=16604.81, stdev=11340.79 00:21:34.850 clat percentiles (usec): 00:21:34.850 | 1.00th=[ 3261], 5.00th=[ 
6652], 10.00th=[ 7570], 20.00th=[ 8979], 00:21:34.850 | 30.00th=[10159], 40.00th=[11338], 50.00th=[12911], 60.00th=[14484], 00:21:34.850 | 70.00th=[19006], 80.00th=[22676], 90.00th=[29230], 95.00th=[34341], 00:21:34.850 | 99.00th=[66847], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:21:34.850 | 99.99th=[67634] 00:21:34.850 bw ( KiB/s): min=13888, max=16384, per=21.29%, avg=15136.00, stdev=1764.94, samples=2 00:21:34.850 iops : min= 3472, max= 4096, avg=3784.00, stdev=441.23, samples=2 00:21:34.850 lat (usec) : 1000=0.05% 00:21:34.850 lat (msec) : 2=0.20%, 4=1.44%, 10=18.33%, 20=51.33%, 50=27.16% 00:21:34.850 lat (msec) : 100=1.48% 00:21:34.850 cpu : usr=2.68%, sys=3.88%, ctx=324, majf=0, minf=1 00:21:34.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:34.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:34.850 issued rwts: total=3584,3912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:34.850 job3: (groupid=0, jobs=1): err= 0: pid=332019: Wed Apr 24 10:16:47 2024 00:21:34.850 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:21:34.850 slat (nsec): min=1194, max=12528k, avg=107823.47, stdev=769518.34 00:21:34.850 clat (usec): min=5691, max=38310, avg=13449.62, stdev=4221.05 00:21:34.850 lat (usec): min=6135, max=38314, avg=13557.44, stdev=4274.60 00:21:34.850 clat percentiles (usec): 00:21:34.850 | 1.00th=[ 6652], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10421], 00:21:34.850 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12911], 60.00th=[13698], 00:21:34.850 | 70.00th=[14484], 80.00th=[15533], 90.00th=[18744], 95.00th=[21627], 00:21:34.850 | 99.00th=[28443], 99.50th=[33817], 99.90th=[38536], 99.95th=[38536], 00:21:34.850 | 99.99th=[38536] 00:21:34.850 write: IOPS=4654, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1003msec); 0 zone resets 00:21:34.850 slat (nsec): min=1981, max=10924k, avg=99385.20, stdev=553306.32 00:21:34.850 clat (usec): min=1690, max=38259, avg=13951.38, stdev=7342.75 00:21:34.850 lat (usec): min=1702, max=38264, avg=14050.77, stdev=7387.93 00:21:34.850 clat percentiles (usec): 00:21:34.850 | 1.00th=[ 3687], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 8586], 00:21:34.850 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11600], 60.00th=[12518], 00:21:34.850 | 70.00th=[14222], 80.00th=[17433], 90.00th=[28967], 95.00th=[30278], 00:21:34.850 | 99.00th=[32900], 99.50th=[33817], 99.90th=[37487], 99.95th=[37487], 00:21:34.850 | 99.99th=[38011] 00:21:34.850 bw ( KiB/s): min=15368, max=21552, per=25.97%, avg=18460.00, stdev=4372.75, samples=2 00:21:34.850 iops : min= 3842, max= 5388, avg=4615.00, stdev=1093.19, samples=2 00:21:34.850 lat (msec) : 2=0.06%, 4=0.82%, 10=20.52%, 20=66.93%, 50=11.68% 00:21:34.850 cpu : usr=2.89%, sys=4.19%, ctx=417, majf=0, minf=1 00:21:34.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:34.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:34.850 issued rwts: total=4608,4668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:34.850 00:21:34.850 Run status group 0 (all jobs): 00:21:34.850 READ: bw=66.5MiB/s (69.7MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=67.0MiB (70.3MB), run=1003-1008msec 00:21:34.850 WRITE: 
bw=69.4MiB/s (72.8MB/s), 15.2MiB/s-18.4MiB/s (15.9MB/s-19.3MB/s), io=70.0MiB (73.4MB), run=1003-1008msec 00:21:34.850 00:21:34.850 Disk stats (read/write): 00:21:34.850 nvme0n1: ios=3738/4096, merge=0/0, ticks=24530/35005, in_queue=59535, util=97.49% 00:21:34.850 nvme0n2: ios=4102/4108, merge=0/0, ticks=36421/40238, in_queue=76659, util=87.21% 00:21:34.850 nvme0n3: ios=2804/3072, merge=0/0, ticks=38396/35899, in_queue=74295, util=89.07% 00:21:34.850 nvme0n4: ios=3603/4096, merge=0/0, ticks=48527/56998, in_queue=105525, util=98.32% 00:21:34.850 10:16:47 -- target/fio.sh@55 -- # sync 00:21:34.850 10:16:47 -- target/fio.sh@59 -- # fio_pid=332245 00:21:34.850 10:16:47 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:34.850 10:16:47 -- target/fio.sh@61 -- # sleep 3 00:21:34.850 [global] 00:21:34.850 thread=1 00:21:34.850 invalidate=1 00:21:34.850 rw=read 00:21:34.850 time_based=1 00:21:34.850 runtime=10 00:21:34.850 ioengine=libaio 00:21:34.850 direct=1 00:21:34.850 bs=4096 00:21:34.850 iodepth=1 00:21:34.850 norandommap=1 00:21:34.850 numjobs=1 00:21:34.850 00:21:34.850 [job0] 00:21:34.850 filename=/dev/nvme0n1 00:21:34.850 [job1] 00:21:34.850 filename=/dev/nvme0n2 00:21:34.850 [job2] 00:21:34.850 filename=/dev/nvme0n3 00:21:34.850 [job3] 00:21:34.850 filename=/dev/nvme0n4 00:21:34.850 Could not set queue depth (nvme0n1) 00:21:34.850 Could not set queue depth (nvme0n2) 00:21:34.850 Could not set queue depth (nvme0n3) 00:21:34.850 Could not set queue depth (nvme0n4) 00:21:35.108 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:35.108 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:35.108 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:35.108 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:35.108 fio-3.35 00:21:35.108 Starting 4 threads 00:21:37.638 10:16:50 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:37.895 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=23678976, buflen=4096 00:21:37.896 fio: pid=332404, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:37.896 10:16:51 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:38.154 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26021888, buflen=4096 00:21:38.154 fio: pid=332403, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:38.154 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:38.154 10:16:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:38.154 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:38.154 10:16:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:38.154 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2039808, buflen=4096 00:21:38.154 fio: pid=332401, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:38.413 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:38.413 10:16:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:38.413 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8310784, buflen=4096 00:21:38.413 fio: pid=332402, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:38.413 00:21:38.413 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332401: Wed Apr 24 10:16:51 2024 00:21:38.413 read: IOPS=161, BW=643KiB/s (658kB/s)(1992KiB/3098msec) 00:21:38.413 slat (usec): min=7, max=12617, avg=37.24, stdev=564.42 00:21:38.413 clat (usec): min=212, max=42918, avg=6137.73, stdev=14252.15 00:21:38.413 lat (usec): min=222, max=54002, avg=6175.00, stdev=14329.30 00:21:38.413 clat percentiles (usec): 00:21:38.413 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 265], 00:21:38.413 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 330], 00:21:38.413 | 70.00th=[ 469], 80.00th=[ 510], 90.00th=[41157], 95.00th=[41157], 00:21:38.413 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:38.413 | 99.99th=[42730] 00:21:38.413 bw ( KiB/s): min= 96, max= 3456, per=4.37%, avg=774.40, stdev=1499.09, samples=5 00:21:38.413 iops : min= 24, max= 864, avg=193.60, stdev=374.77, samples=5 00:21:38.413 lat (usec) : 250=15.43%, 500=61.92%, 750=8.02%, 1000=0.20% 00:21:38.413 lat (msec) : 50=14.23% 00:21:38.413 cpu : usr=0.03%, sys=0.29%, ctx=502, majf=0, minf=1 00:21:38.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:38.413 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332402: Wed Apr 24 10:16:51 2024 00:21:38.413 read: IOPS=612, BW=2449KiB/s (2508kB/s)(8116KiB/3314msec) 00:21:38.413 slat (usec): min=6, max=26619, avg=26.21, stdev=616.05 00:21:38.413 clat (usec): min=243, max=42285, avg=1592.42, stdev=6799.29 00:21:38.413 lat (usec): min=250, max=67931, avg=1618.63, stdev=6931.91 00:21:38.413 clat percentiles (usec): 00:21:38.413 | 1.00th=[ 289], 5.00th=[ 330], 10.00th=[ 351], 20.00th=[ 379], 00:21:38.413 | 30.00th=[ 400], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 441], 00:21:38.413 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[ 529], 00:21:38.413 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:38.413 | 99.99th=[42206] 00:21:38.413 bw ( KiB/s): min= 99, max= 9048, per=15.24%, avg=2696.50, stdev=3522.46, samples=6 00:21:38.413 iops : min= 24, max= 2262, avg=674.00, stdev=880.73, samples=6 00:21:38.413 lat (usec) : 250=0.05%, 500=88.67%, 750=8.18%, 1000=0.10% 00:21:38.413 lat (msec) : 2=0.10%, 50=2.86% 00:21:38.413 cpu : usr=0.33%, sys=1.03%, ctx=2034, majf=0, minf=1 00:21:38.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:38.413 
job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332403: Wed Apr 24 10:16:51 2024 00:21:38.413 read: IOPS=2202, BW=8808KiB/s (9020kB/s)(24.8MiB/2885msec) 00:21:38.413 slat (nsec): min=7453, max=40264, avg=8642.97, stdev=1450.29 00:21:38.413 clat (usec): min=380, max=2224, avg=440.13, stdev=39.26 00:21:38.413 lat (usec): min=390, max=2233, avg=448.77, stdev=39.37 00:21:38.413 clat percentiles (usec): 00:21:38.413 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 424], 00:21:38.413 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:21:38.413 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 461], 95.00th=[ 474], 00:21:38.413 | 99.00th=[ 537], 99.50th=[ 635], 99.90th=[ 734], 99.95th=[ 971], 00:21:38.413 | 99.99th=[ 2212] 00:21:38.413 bw ( KiB/s): min= 8680, max= 8928, per=49.79%, avg=8811.20, stdev=88.65, samples=5 00:21:38.413 iops : min= 2170, max= 2232, avg=2202.80, stdev=22.16, samples=5 00:21:38.413 lat (usec) : 500=98.33%, 750=1.56%, 1000=0.05% 00:21:38.413 lat (msec) : 2=0.03%, 4=0.02% 00:21:38.413 cpu : usr=1.32%, sys=3.61%, ctx=6355, majf=0, minf=1 00:21:38.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 issued rwts: total=6354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:38.413 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=332404: Wed Apr 24 10:16:51 2024 00:21:38.413 read: IOPS=2141, BW=8564KiB/s (8770kB/s)(22.6MiB/2700msec) 00:21:38.413 slat (usec): min=3, max=101, avg= 8.51, stdev= 4.42 00:21:38.413 clat (usec): min=301, max=42084, avg=453.33, stdev=554.43 00:21:38.413 lat (usec): min=308, max=42091, avg=461.84, stdev=554.85 00:21:38.413 clat percentiles (usec): 00:21:38.413 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 388], 00:21:38.413 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 441], 00:21:38.413 | 70.00th=[ 449], 80.00th=[ 478], 90.00th=[ 578], 95.00th=[ 652], 00:21:38.413 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 840], 99.95th=[ 865], 00:21:38.413 | 99.99th=[42206] 00:21:38.413 bw ( KiB/s): min= 6384, max=10456, per=49.49%, avg=8758.40, stdev=1474.60, samples=5 00:21:38.413 iops : min= 1596, max= 2614, avg=2189.60, stdev=368.65, samples=5 00:21:38.413 lat (usec) : 500=82.90%, 750=16.91%, 1000=0.16% 00:21:38.413 lat (msec) : 50=0.02% 00:21:38.413 cpu : usr=0.70%, sys=2.22%, ctx=5782, majf=0, minf=2 00:21:38.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.413 issued rwts: total=5782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:38.413 00:21:38.413 Run status group 0 (all jobs): 00:21:38.413 READ: bw=17.3MiB/s (18.1MB/s), 643KiB/s-8808KiB/s (658kB/s-9020kB/s), io=57.3MiB (60.1MB), run=2700-3314msec 00:21:38.413 00:21:38.413 Disk stats (read/write): 00:21:38.413 nvme0n1: ios=493/0, merge=0/0, ticks=2849/0, in_queue=2849, util=95.36% 00:21:38.413 nvme0n2: ios=2025/0, merge=0/0, ticks=3038/0, in_queue=3038, util=95.52% 00:21:38.413 nvme0n3: ios=6382/0, merge=0/0, ticks=3679/0, 
in_queue=3679, util=99.70% 00:21:38.413 nvme0n4: ios=5617/0, merge=0/0, ticks=2515/0, in_queue=2515, util=96.41% 00:21:38.672 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:38.673 10:16:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:38.932 10:16:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:38.932 10:16:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:38.932 10:16:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:38.932 10:16:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:39.191 10:16:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:39.191 10:16:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:39.450 10:16:52 -- target/fio.sh@69 -- # fio_status=0 00:21:39.450 10:16:52 -- target/fio.sh@70 -- # wait 332245 00:21:39.450 10:16:52 -- target/fio.sh@70 -- # fio_status=4 00:21:39.450 10:16:52 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:39.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:39.450 10:16:52 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:39.450 10:16:52 -- common/autotest_common.sh@1198 -- # local i=0 00:21:39.450 10:16:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:39.450 10:16:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:39.450 10:16:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:39.450 10:16:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:39.450 10:16:52 -- common/autotest_common.sh@1210 -- # return 0 00:21:39.450 10:16:52 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:39.450 10:16:52 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:39.450 nvmf hotplug test: fio failed as expected 00:21:39.450 10:16:52 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.709 10:16:52 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:39.709 10:16:52 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:39.709 10:16:52 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:39.709 10:16:52 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:39.709 10:16:52 -- target/fio.sh@91 -- # nvmftestfini 00:21:39.709 10:16:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:39.709 10:16:52 -- nvmf/common.sh@116 -- # sync 00:21:39.709 10:16:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:39.709 10:16:52 -- nvmf/common.sh@119 -- # set +e 00:21:39.709 10:16:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:39.709 10:16:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:39.709 rmmod nvme_tcp 00:21:39.709 rmmod nvme_fabrics 00:21:39.709 rmmod nvme_keyring 00:21:39.709 10:16:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:39.709 10:16:52 -- nvmf/common.sh@123 -- # set -e 00:21:39.709 10:16:52 -- nvmf/common.sh@124 -- # return 0 00:21:39.709 10:16:52 -- nvmf/common.sh@477 -- # '[' -n 
329294 ']' 00:21:39.709 10:16:52 -- nvmf/common.sh@478 -- # killprocess 329294 00:21:39.709 10:16:52 -- common/autotest_common.sh@926 -- # '[' -z 329294 ']' 00:21:39.709 10:16:52 -- common/autotest_common.sh@930 -- # kill -0 329294 00:21:39.709 10:16:52 -- common/autotest_common.sh@931 -- # uname 00:21:39.709 10:16:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:39.709 10:16:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 329294 00:21:39.709 10:16:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:39.709 10:16:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:39.709 10:16:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 329294' 00:21:39.709 killing process with pid 329294 00:21:39.709 10:16:52 -- common/autotest_common.sh@945 -- # kill 329294 00:21:39.709 10:16:52 -- common/autotest_common.sh@950 -- # wait 329294 00:21:39.968 10:16:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:39.968 10:16:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:39.968 10:16:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:39.968 10:16:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.968 10:16:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:39.968 10:16:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.968 10:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.968 10:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.505 10:16:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:42.505 00:21:42.505 real 0m26.469s 00:21:42.505 user 1m45.733s 00:21:42.505 sys 0m8.006s 00:21:42.505 10:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.505 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:42.505 ************************************ 00:21:42.505 END TEST nvmf_fio_target 00:21:42.505 ************************************ 00:21:42.505 10:16:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:42.505 10:16:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:42.505 10:16:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:42.505 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:42.505 ************************************ 00:21:42.505 START TEST nvmf_bdevio 00:21:42.505 ************************************ 00:21:42.505 10:16:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:42.505 * Looking for test storage... 
00:21:42.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:42.505 10:16:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.505 10:16:55 -- nvmf/common.sh@7 -- # uname -s 00:21:42.505 10:16:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.505 10:16:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.505 10:16:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.505 10:16:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.505 10:16:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.505 10:16:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.505 10:16:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.505 10:16:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.505 10:16:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.505 10:16:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.505 10:16:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.505 10:16:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.505 10:16:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.505 10:16:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.505 10:16:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.505 10:16:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.505 10:16:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.505 10:16:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.505 10:16:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.505 10:16:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.505 10:16:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.505 10:16:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.505 10:16:55 -- paths/export.sh@5 -- # export PATH 00:21:42.505 10:16:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.505 10:16:55 -- nvmf/common.sh@46 -- # : 0 00:21:42.505 10:16:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:42.505 10:16:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:42.505 10:16:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:42.505 10:16:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.505 10:16:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.505 10:16:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:42.505 10:16:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:42.505 10:16:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:42.505 10:16:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.505 10:16:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.505 10:16:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:42.505 10:16:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:42.505 10:16:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.505 10:16:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:42.505 10:16:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:42.505 10:16:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:42.505 10:16:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.505 10:16:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.505 10:16:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.505 10:16:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:42.505 10:16:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:42.505 10:16:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:42.505 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:47.774 10:17:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.774 10:17:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:47.774 10:17:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:47.775 10:17:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:47.775 10:17:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:47.775 10:17:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:47.775 10:17:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:47.775 10:17:00 -- nvmf/common.sh@294 -- # net_devs=() 00:21:47.775 10:17:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:47.775 10:17:00 -- nvmf/common.sh@295 
-- # e810=() 00:21:47.775 10:17:00 -- nvmf/common.sh@295 -- # local -ga e810 00:21:47.775 10:17:00 -- nvmf/common.sh@296 -- # x722=() 00:21:47.775 10:17:00 -- nvmf/common.sh@296 -- # local -ga x722 00:21:47.775 10:17:00 -- nvmf/common.sh@297 -- # mlx=() 00:21:47.775 10:17:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:47.775 10:17:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.775 10:17:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:47.775 10:17:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:47.775 10:17:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:47.775 10:17:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.775 10:17:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.775 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.775 10:17:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.775 10:17:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.775 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.775 10:17:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:47.775 10:17:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.775 10:17:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.775 10:17:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.775 10:17:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.775 10:17:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.775 Found 
net devices under 0000:86:00.0: cvl_0_0 00:21:47.775 10:17:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.775 10:17:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.775 10:17:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.775 10:17:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.775 10:17:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.775 10:17:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.775 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.775 10:17:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.775 10:17:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:47.775 10:17:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:47.775 10:17:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:47.775 10:17:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.775 10:17:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.775 10:17:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.775 10:17:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:47.775 10:17:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.775 10:17:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.775 10:17:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:47.775 10:17:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.775 10:17:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.775 10:17:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:47.775 10:17:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:47.775 10:17:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.775 10:17:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.775 10:17:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.775 10:17:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.775 10:17:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:47.775 10:17:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.775 10:17:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.775 10:17:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.775 10:17:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:47.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:21:47.775 00:21:47.775 --- 10.0.0.2 ping statistics --- 00:21:47.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.775 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:47.775 10:17:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:21:47.775 00:21:47.775 --- 10.0.0.1 ping statistics --- 00:21:47.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.775 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:47.775 10:17:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.775 10:17:00 -- nvmf/common.sh@410 -- # return 0 00:21:47.775 10:17:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:47.775 10:17:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.775 10:17:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:47.775 10:17:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.775 10:17:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:47.775 10:17:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:47.775 10:17:00 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:47.775 10:17:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:47.775 10:17:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:47.775 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:21:47.775 10:17:00 -- nvmf/common.sh@469 -- # nvmfpid=336590 00:21:47.775 10:17:00 -- nvmf/common.sh@470 -- # waitforlisten 336590 00:21:47.775 10:17:00 -- common/autotest_common.sh@819 -- # '[' -z 336590 ']' 00:21:47.775 10:17:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.775 10:17:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.775 10:17:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.775 10:17:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.775 10:17:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:47.775 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:21:47.775 [2024-04-24 10:17:00.453559] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:47.775 [2024-04-24 10:17:00.453601] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.775 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.775 [2024-04-24 10:17:00.510876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.775 [2024-04-24 10:17:00.588500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.775 [2024-04-24 10:17:00.588608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.775 [2024-04-24 10:17:00.588617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.775 [2024-04-24 10:17:00.588624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
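The namespace plumbing traced above is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, default namespace) across a physical NIC pair. Condensed into a standalone sketch; the commands, interface names, and addresses are taken verbatim from this trace, only the ordering comments are editorial:

# Move one port of the NIC pair into its own network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address each side; the initiator port stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring up both links plus the namespaced loopback.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic (port 4420) on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the entries that follow.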
00:21:47.775 [2024-04-24 10:17:00.588732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:47.775 [2024-04-24 10:17:00.588837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:47.775 [2024-04-24 10:17:00.588941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.775 [2024-04-24 10:17:00.588943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:48.034 10:17:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.034 10:17:01 -- common/autotest_common.sh@852 -- # return 0 00:21:48.034 10:17:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:48.034 10:17:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:48.034 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.034 10:17:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.034 10:17:01 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.034 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.034 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.034 [2024-04-24 10:17:01.286277] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.034 10:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.034 10:17:01 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:48.034 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.034 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.034 Malloc0 00:21:48.034 10:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.034 10:17:01 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.034 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.034 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.293 10:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.293 10:17:01 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.293 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.293 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.293 10:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.293 10:17:01 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.293 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.293 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.293 [2024-04-24 10:17:01.329645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.293 10:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.293 10:17:01 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:48.293 10:17:01 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:48.293 10:17:01 -- nvmf/common.sh@520 -- # config=() 00:21:48.293 10:17:01 -- nvmf/common.sh@520 -- # local subsystem config 00:21:48.293 10:17:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:48.293 10:17:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:48.293 { 00:21:48.293 "params": { 00:21:48.293 "name": "Nvme$subsystem", 00:21:48.293 "trtype": "$TEST_TRANSPORT", 00:21:48.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.293 "adrfam": "ipv4", 00:21:48.293 "trsvcid": 
"$NVMF_PORT", 00:21:48.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.293 "hdgst": ${hdgst:-false}, 00:21:48.293 "ddgst": ${ddgst:-false} 00:21:48.293 }, 00:21:48.293 "method": "bdev_nvme_attach_controller" 00:21:48.293 } 00:21:48.293 EOF 00:21:48.293 )") 00:21:48.293 10:17:01 -- nvmf/common.sh@542 -- # cat 00:21:48.293 10:17:01 -- nvmf/common.sh@544 -- # jq . 00:21:48.293 10:17:01 -- nvmf/common.sh@545 -- # IFS=, 00:21:48.293 10:17:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:48.293 "params": { 00:21:48.293 "name": "Nvme1", 00:21:48.293 "trtype": "tcp", 00:21:48.293 "traddr": "10.0.0.2", 00:21:48.293 "adrfam": "ipv4", 00:21:48.293 "trsvcid": "4420", 00:21:48.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.293 "hdgst": false, 00:21:48.293 "ddgst": false 00:21:48.293 }, 00:21:48.293 "method": "bdev_nvme_attach_controller" 00:21:48.293 }' 00:21:48.293 [2024-04-24 10:17:01.373094] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:48.293 [2024-04-24 10:17:01.373137] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336697 ] 00:21:48.293 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.293 [2024-04-24 10:17:01.428701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:48.293 [2024-04-24 10:17:01.501662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.293 [2024-04-24 10:17:01.501755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.293 [2024-04-24 10:17:01.501757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.551 [2024-04-24 10:17:01.653817] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:48.551 [2024-04-24 10:17:01.653851] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:48.551 I/O targets: 00:21:48.551 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:48.551 00:21:48.551 00:21:48.551 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.551 http://cunit.sourceforge.net/ 00:21:48.551 00:21:48.551 00:21:48.551 Suite: bdevio tests on: Nvme1n1 00:21:48.551 Test: blockdev write read block ...passed 00:21:48.551 Test: blockdev write zeroes read block ...passed 00:21:48.551 Test: blockdev write zeroes read no split ...passed 00:21:48.551 Test: blockdev write zeroes read split ...passed 00:21:48.809 Test: blockdev write zeroes read split partial ...passed 00:21:48.809 Test: blockdev reset ...[2024-04-24 10:17:01.863977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.809 [2024-04-24 10:17:01.864038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1793590 (9): Bad file descriptor 00:21:48.809 [2024-04-24 10:17:01.961850] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:48.809 passed 00:21:48.809 Test: blockdev write read 8 blocks ...passed 00:21:48.809 Test: blockdev write read size > 128k ...passed 00:21:48.809 Test: blockdev write read invalid size ...passed 00:21:48.809 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:48.809 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:48.809 Test: blockdev write read max offset ...passed 00:21:49.069 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:49.069 Test: blockdev writev readv 8 blocks ...passed 00:21:49.069 Test: blockdev writev readv 30 x 1block ...passed 00:21:49.069 Test: blockdev writev readv block ...passed 00:21:49.069 Test: blockdev writev readv size > 128k ...passed 00:21:49.069 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:49.069 Test: blockdev comparev and writev ...[2024-04-24 10:17:02.179271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.179298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.179311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.179319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.179629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.179640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.179652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.179660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.179970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.179982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.179993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.180000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.180319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.180330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.180346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.069 [2024-04-24 10:17:02.180354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:49.069 passed 00:21:49.069 Test: blockdev nvme passthru rw ...passed 00:21:49.069 Test: blockdev nvme passthru vendor specific ...[2024-04-24 10:17:02.263533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.069 [2024-04-24 10:17:02.263550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.263726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.069 [2024-04-24 10:17:02.263736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.263905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.069 [2024-04-24 10:17:02.263915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:49.069 [2024-04-24 10:17:02.264086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.069 [2024-04-24 10:17:02.264097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:49.069 passed 00:21:49.069 Test: blockdev nvme admin passthru ...passed 00:21:49.069 Test: blockdev copy ...passed 00:21:49.069 00:21:49.069 Run Summary: Type Total Ran Passed Failed Inactive 00:21:49.069 suites 1 1 n/a 0 0 00:21:49.069 tests 23 23 23 0 0 00:21:49.069 asserts 152 152 152 0 n/a 00:21:49.069 00:21:49.069 Elapsed time = 1.325 seconds 00:21:49.327 10:17:02 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.327 10:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.327 10:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:49.327 10:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.327 10:17:02 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:49.327 10:17:02 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:49.327 10:17:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:49.327 10:17:02 -- nvmf/common.sh@116 -- # sync 00:21:49.327 10:17:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:49.327 10:17:02 -- nvmf/common.sh@119 -- # set +e 00:21:49.327 10:17:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:49.327 10:17:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:49.327 rmmod nvme_tcp 00:21:49.327 rmmod nvme_fabrics 00:21:49.327 rmmod nvme_keyring 00:21:49.327 10:17:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:49.327 10:17:02 -- nvmf/common.sh@123 -- # set -e 00:21:49.327 10:17:02 -- nvmf/common.sh@124 -- # return 0 00:21:49.327 10:17:02 -- nvmf/common.sh@477 -- # '[' -n 336590 ']' 00:21:49.327 10:17:02 -- nvmf/common.sh@478 -- # killprocess 336590 00:21:49.327 10:17:02 -- common/autotest_common.sh@926 -- # '[' -z 336590 ']' 00:21:49.327 10:17:02 -- common/autotest_common.sh@930 -- # kill -0 336590 00:21:49.327 10:17:02 -- common/autotest_common.sh@931 -- # uname 00:21:49.327 10:17:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:49.327 10:17:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 336590 00:21:49.585 10:17:02 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:49.585 10:17:02 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:49.585 10:17:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 336590' 00:21:49.585 killing process with pid 336590 00:21:49.585 10:17:02 -- common/autotest_common.sh@945 -- # kill 336590 00:21:49.585 10:17:02 -- common/autotest_common.sh@950 -- # wait 336590 00:21:49.585 10:17:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:49.585 10:17:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:49.585 10:17:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:49.585 10:17:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.585 10:17:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:49.585 10:17:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.585 10:17:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.585 10:17:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.117 10:17:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:52.117 00:21:52.117 real 0m9.668s 00:21:52.117 user 0m12.475s 00:21:52.117 sys 0m4.275s 00:21:52.117 10:17:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.117 10:17:04 -- common/autotest_common.sh@10 -- # set +x 00:21:52.117 ************************************ 00:21:52.117 END TEST nvmf_bdevio 00:21:52.117 ************************************ 00:21:52.117 10:17:04 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:52.117 10:17:04 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:52.117 10:17:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:52.117 10:17:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:52.117 10:17:04 -- common/autotest_common.sh@10 -- # set +x 00:21:52.117 ************************************ 00:21:52.117 START TEST nvmf_bdevio_no_huge 00:21:52.117 ************************************ 00:21:52.117 10:17:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:52.117 * Looking for test storage... 
00:21:52.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.117 10:17:05 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.117 10:17:05 -- nvmf/common.sh@7 -- # uname -s 00:21:52.117 10:17:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.117 10:17:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.117 10:17:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.117 10:17:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.117 10:17:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.117 10:17:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.117 10:17:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.118 10:17:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.118 10:17:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.118 10:17:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.118 10:17:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.118 10:17:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.118 10:17:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.118 10:17:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.118 10:17:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.118 10:17:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.118 10:17:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.118 10:17:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.118 10:17:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.118 10:17:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.118 10:17:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.118 10:17:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.118 10:17:05 -- paths/export.sh@5 -- # export PATH 00:21:52.118 10:17:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.118 10:17:05 -- nvmf/common.sh@46 -- # : 0 00:21:52.118 10:17:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:52.118 10:17:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:52.118 10:17:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:52.118 10:17:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.118 10:17:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.118 10:17:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:52.118 10:17:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:52.118 10:17:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:52.118 10:17:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.118 10:17:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.118 10:17:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:52.118 10:17:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:52.118 10:17:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.118 10:17:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:52.118 10:17:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:52.118 10:17:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:52.118 10:17:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.118 10:17:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.118 10:17:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.118 10:17:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:52.118 10:17:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:52.118 10:17:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:52.118 10:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:57.385 10:17:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:57.385 10:17:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:57.385 10:17:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:57.385 10:17:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:57.385 10:17:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:57.385 10:17:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:57.385 10:17:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:57.385 10:17:10 -- nvmf/common.sh@294 -- # net_devs=() 00:21:57.385 10:17:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:57.385 10:17:10 -- nvmf/common.sh@295 
-- # e810=() 00:21:57.385 10:17:10 -- nvmf/common.sh@295 -- # local -ga e810 00:21:57.385 10:17:10 -- nvmf/common.sh@296 -- # x722=() 00:21:57.385 10:17:10 -- nvmf/common.sh@296 -- # local -ga x722 00:21:57.385 10:17:10 -- nvmf/common.sh@297 -- # mlx=() 00:21:57.385 10:17:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:57.385 10:17:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.385 10:17:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:57.385 10:17:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:57.385 10:17:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:57.385 10:17:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.385 10:17:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.385 10:17:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.385 10:17:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.385 10:17:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:57.385 10:17:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.385 10:17:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.385 10:17:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.385 10:17:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.385 10:17:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.385 Found 
net devices under 0000:86:00.0: cvl_0_0 00:21:57.385 10:17:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.385 10:17:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.385 10:17:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.385 10:17:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.385 10:17:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.385 10:17:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.385 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.385 10:17:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.385 10:17:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:57.385 10:17:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:57.385 10:17:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:57.385 10:17:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:57.385 10:17:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.385 10:17:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.385 10:17:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.385 10:17:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:57.385 10:17:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.385 10:17:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.385 10:17:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:57.385 10:17:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.385 10:17:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.385 10:17:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:57.385 10:17:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:57.385 10:17:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.385 10:17:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.385 10:17:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.385 10:17:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.385 10:17:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:57.385 10:17:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.644 10:17:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.644 10:17:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.644 10:17:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:57.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:57.644 00:21:57.644 --- 10.0.0.2 ping statistics --- 00:21:57.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.644 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:57.644 10:17:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:21:57.644 00:21:57.644 --- 10.0.0.1 ping statistics --- 00:21:57.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.644 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:21:57.644 10:17:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.644 10:17:10 -- nvmf/common.sh@410 -- # return 0 00:21:57.644 10:17:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.644 10:17:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.644 10:17:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:57.644 10:17:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:57.644 10:17:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.644 10:17:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:57.644 10:17:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:57.644 10:17:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:57.644 10:17:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.644 10:17:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:57.644 10:17:10 -- common/autotest_common.sh@10 -- # set +x 00:21:57.644 10:17:10 -- nvmf/common.sh@469 -- # nvmfpid=340462 00:21:57.644 10:17:10 -- nvmf/common.sh@470 -- # waitforlisten 340462 00:21:57.644 10:17:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:57.644 10:17:10 -- common/autotest_common.sh@819 -- # '[' -z 340462 ']' 00:21:57.644 10:17:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.644 10:17:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.644 10:17:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.644 10:17:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.644 10:17:10 -- common/autotest_common.sh@10 -- # set +x 00:21:57.644 [2024-04-24 10:17:10.795336] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:57.644 [2024-04-24 10:17:10.795377] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:57.644 [2024-04-24 10:17:10.857641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.902 [2024-04-24 10:17:10.939856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.902 [2024-04-24 10:17:10.939966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.902 [2024-04-24 10:17:10.939975] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.902 [2024-04-24 10:17:10.939982] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
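Condensed, the network plumbing traced above (nvmf_tcp_init) boils down to the following commands, with the target-side port moved into a namespace and the initiator port left on the host (interface names as in this run); both directions are then verified with the two pings shown above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, namespace side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP to the target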
00:21:57.902 [2024-04-24 10:17:10.940106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:57.902 [2024-04-24 10:17:10.940216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:57.902 [2024-04-24 10:17:10.940322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.902 [2024-04-24 10:17:10.940323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:58.468 10:17:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.468 10:17:11 -- common/autotest_common.sh@852 -- # return 0 00:21:58.468 10:17:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.468 10:17:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:58.468 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.468 10:17:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.468 10:17:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.468 10:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.468 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.468 [2024-04-24 10:17:11.648589] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.468 10:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.468 10:17:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.468 10:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.468 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.468 Malloc0 00:21:58.468 10:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.468 10:17:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.468 10:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.468 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.468 10:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.468 10:17:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.468 10:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.468 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.468 10:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.468 10:17:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.468 10:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.468 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.468 [2024-04-24 10:17:11.688867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.468 10:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.468 10:17:11 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:58.468 10:17:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:58.468 10:17:11 -- nvmf/common.sh@520 -- # config=() 00:21:58.468 10:17:11 -- nvmf/common.sh@520 -- # local subsystem config 00:21:58.468 10:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:58.468 10:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:58.468 { 00:21:58.468 "params": { 00:21:58.468 "name": "Nvme$subsystem", 00:21:58.468 "trtype": "$TEST_TRANSPORT", 00:21:58.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.468 "adrfam": "ipv4", 00:21:58.468 
"trsvcid": "$NVMF_PORT", 00:21:58.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.468 "hdgst": ${hdgst:-false}, 00:21:58.468 "ddgst": ${ddgst:-false} 00:21:58.468 }, 00:21:58.468 "method": "bdev_nvme_attach_controller" 00:21:58.468 } 00:21:58.468 EOF 00:21:58.468 )") 00:21:58.468 10:17:11 -- nvmf/common.sh@542 -- # cat 00:21:58.468 10:17:11 -- nvmf/common.sh@544 -- # jq . 00:21:58.468 10:17:11 -- nvmf/common.sh@545 -- # IFS=, 00:21:58.468 10:17:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:58.468 "params": { 00:21:58.468 "name": "Nvme1", 00:21:58.468 "trtype": "tcp", 00:21:58.468 "traddr": "10.0.0.2", 00:21:58.468 "adrfam": "ipv4", 00:21:58.468 "trsvcid": "4420", 00:21:58.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.468 "hdgst": false, 00:21:58.468 "ddgst": false 00:21:58.468 }, 00:21:58.468 "method": "bdev_nvme_attach_controller" 00:21:58.468 }' 00:21:58.468 [2024-04-24 10:17:11.732545] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:58.468 [2024-04-24 10:17:11.732588] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid340713 ] 00:21:58.726 [2024-04-24 10:17:11.790625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.726 [2024-04-24 10:17:11.874955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.726 [2024-04-24 10:17:11.875048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.726 [2024-04-24 10:17:11.875050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.984 [2024-04-24 10:17:12.131053] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:58.984 [2024-04-24 10:17:12.131086] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:58.984 I/O targets: 00:21:58.984 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:58.984 00:21:58.984 00:21:58.984 CUnit - A unit testing framework for C - Version 2.1-3 00:21:58.984 http://cunit.sourceforge.net/ 00:21:58.984 00:21:58.984 00:21:58.984 Suite: bdevio tests on: Nvme1n1 00:21:58.984 Test: blockdev write read block ...passed 00:21:58.984 Test: blockdev write zeroes read block ...passed 00:21:58.984 Test: blockdev write zeroes read no split ...passed 00:21:59.244 Test: blockdev write zeroes read split ...passed 00:21:59.244 Test: blockdev write zeroes read split partial ...passed 00:21:59.244 Test: blockdev reset ...[2024-04-24 10:17:12.347410] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.244 [2024-04-24 10:17:12.347470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeeea0 (9): Bad file descriptor 00:21:59.244 [2024-04-24 10:17:12.451812] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:59.244 passed 00:21:59.244 Test: blockdev write read 8 blocks ...passed 00:21:59.244 Test: blockdev write read size > 128k ...passed 00:21:59.244 Test: blockdev write read invalid size ...passed 00:21:59.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:59.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:59.244 Test: blockdev write read max offset ...passed 00:21:59.501 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:59.501 Test: blockdev writev readv 8 blocks ...passed 00:21:59.501 Test: blockdev writev readv 30 x 1block ...passed 00:21:59.501 Test: blockdev writev readv block ...passed 00:21:59.501 Test: blockdev writev readv size > 128k ...passed 00:21:59.501 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:59.501 Test: blockdev comparev and writev ...[2024-04-24 10:17:12.626057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.626089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.626103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.626111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.626416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.626427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.626439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.626446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.626733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.626743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.626754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.626761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.627062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.627078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.627090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.501 [2024-04-24 10:17:12.627098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:59.501 passed 00:21:59.501 Test: blockdev nvme passthru rw ...passed 00:21:59.501 Test: blockdev nvme passthru vendor specific ...[2024-04-24 10:17:12.710494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.501 [2024-04-24 10:17:12.710516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.710693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.501 [2024-04-24 10:17:12.710703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.710877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.501 [2024-04-24 10:17:12.710887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:59.501 [2024-04-24 10:17:12.711053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.501 [2024-04-24 10:17:12.711062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:59.501 passed 00:21:59.501 Test: blockdev nvme admin passthru ...passed 00:21:59.501 Test: blockdev copy ...passed 00:21:59.501 00:21:59.501 Run Summary: Type Total Ran Passed Failed Inactive 00:21:59.501 suites 1 1 n/a 0 0 00:21:59.501 tests 23 23 23 0 0 00:21:59.501 asserts 152 152 152 0 n/a 00:21:59.501 00:21:59.501 Elapsed time = 1.277 seconds 00:22:00.067 10:17:13 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.067 10:17:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.067 10:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:00.067 10:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.067 10:17:13 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:00.067 10:17:13 -- target/bdevio.sh@30 -- # nvmftestfini 00:22:00.067 10:17:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:00.067 10:17:13 -- nvmf/common.sh@116 -- # sync 00:22:00.067 10:17:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:00.067 10:17:13 -- nvmf/common.sh@119 -- # set +e 00:22:00.067 10:17:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:00.067 10:17:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:00.067 rmmod nvme_tcp 00:22:00.067 rmmod nvme_fabrics 00:22:00.067 rmmod nvme_keyring 00:22:00.067 10:17:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:00.067 10:17:13 -- nvmf/common.sh@123 -- # set -e 00:22:00.067 10:17:13 -- nvmf/common.sh@124 -- # return 0 00:22:00.067 10:17:13 -- nvmf/common.sh@477 -- # '[' -n 340462 ']' 00:22:00.067 10:17:13 -- nvmf/common.sh@478 -- # killprocess 340462 00:22:00.067 10:17:13 -- common/autotest_common.sh@926 -- # '[' -z 340462 ']' 00:22:00.067 10:17:13 -- common/autotest_common.sh@930 -- # kill -0 340462 00:22:00.067 10:17:13 -- common/autotest_common.sh@931 -- # uname 00:22:00.067 10:17:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.067 10:17:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 340462 00:22:00.067 10:17:13 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:22:00.067 10:17:13 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:22:00.067 10:17:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 340462' 00:22:00.067 killing process with pid 340462 00:22:00.067 10:17:13 -- common/autotest_common.sh@945 -- # kill 340462 00:22:00.067 10:17:13 -- common/autotest_common.sh@950 -- # wait 340462 00:22:00.325 10:17:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:00.325 10:17:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:00.325 10:17:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:00.325 10:17:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.325 10:17:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:00.325 10:17:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.325 10:17:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.325 10:17:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.858 10:17:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:02.858 00:22:02.858 real 0m10.623s 00:22:02.858 user 0m14.073s 00:22:02.858 sys 0m5.154s 00:22:02.858 10:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.858 10:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:02.858 ************************************ 00:22:02.858 END TEST nvmf_bdevio_no_huge 00:22:02.858 ************************************ 00:22:02.858 10:17:15 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:02.858 10:17:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:02.858 10:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:02.858 10:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:02.858 ************************************ 00:22:02.858 START TEST nvmf_tls 00:22:02.858 ************************************ 00:22:02.858 10:17:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:02.858 * Looking for test storage... 
00:22:02.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.858 10:17:15 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.858 10:17:15 -- nvmf/common.sh@7 -- # uname -s 00:22:02.858 10:17:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.858 10:17:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.858 10:17:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.858 10:17:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.858 10:17:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.858 10:17:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.858 10:17:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.858 10:17:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.858 10:17:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.858 10:17:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.858 10:17:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.858 10:17:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.858 10:17:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.858 10:17:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.858 10:17:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.858 10:17:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.858 10:17:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.858 10:17:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.858 10:17:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.858 10:17:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.858 10:17:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.858 10:17:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.858 10:17:15 -- paths/export.sh@5 -- # export PATH 00:22:02.858 10:17:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.858 10:17:15 -- nvmf/common.sh@46 -- # : 0 00:22:02.858 10:17:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:02.858 10:17:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:02.858 10:17:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:02.858 10:17:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.858 10:17:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.858 10:17:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:02.858 10:17:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:02.858 10:17:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:02.858 10:17:15 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.858 10:17:15 -- target/tls.sh@71 -- # nvmftestinit 00:22:02.858 10:17:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:02.858 10:17:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.858 10:17:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:02.858 10:17:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:02.858 10:17:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:02.858 10:17:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.858 10:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.858 10:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.858 10:17:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:02.858 10:17:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:02.858 10:17:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:02.858 10:17:15 -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 10:17:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:08.152 10:17:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:08.152 10:17:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:08.152 10:17:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:08.152 10:17:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:08.152 10:17:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:08.152 10:17:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:08.152 10:17:20 -- nvmf/common.sh@294 -- # net_devs=() 00:22:08.152 10:17:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:08.152 10:17:20 -- nvmf/common.sh@295 -- # e810=() 00:22:08.152 
10:17:20 -- nvmf/common.sh@295 -- # local -ga e810 00:22:08.152 10:17:20 -- nvmf/common.sh@296 -- # x722=() 00:22:08.152 10:17:20 -- nvmf/common.sh@296 -- # local -ga x722 00:22:08.152 10:17:20 -- nvmf/common.sh@297 -- # mlx=() 00:22:08.152 10:17:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:08.152 10:17:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.152 10:17:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:08.152 10:17:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:08.152 10:17:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:08.152 10:17:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:08.152 10:17:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.152 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.152 10:17:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:08.152 10:17:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.152 10:17:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:08.152 10:17:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:08.152 10:17:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.152 10:17:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:08.152 10:17:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.152 10:17:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.152 Found net devices under 
0000:86:00.0: cvl_0_0 00:22:08.152 10:17:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.152 10:17:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:08.152 10:17:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.152 10:17:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:08.152 10:17:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.152 10:17:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.152 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.152 10:17:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.152 10:17:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:08.152 10:17:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:08.152 10:17:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:08.152 10:17:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:08.152 10:17:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.152 10:17:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.152 10:17:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.152 10:17:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:08.152 10:17:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.152 10:17:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.152 10:17:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:08.152 10:17:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.152 10:17:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.152 10:17:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:08.152 10:17:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:08.152 10:17:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.152 10:17:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.152 10:17:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.152 10:17:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.152 10:17:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:08.152 10:17:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.152 10:17:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.152 10:17:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.152 10:17:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:08.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:22:08.152 00:22:08.152 --- 10.0.0.2 ping statistics --- 00:22:08.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.152 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:08.152 10:17:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:22:08.152 00:22:08.152 --- 10.0.0.1 ping statistics --- 00:22:08.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.152 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:08.152 10:17:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.152 10:17:21 -- nvmf/common.sh@410 -- # return 0 00:22:08.152 10:17:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:08.152 10:17:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.152 10:17:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:08.152 10:17:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:08.152 10:17:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.152 10:17:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:08.152 10:17:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:08.152 10:17:21 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:08.152 10:17:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:08.152 10:17:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:08.152 10:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 10:17:21 -- nvmf/common.sh@469 -- # nvmfpid=344287 00:22:08.152 10:17:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:08.152 10:17:21 -- nvmf/common.sh@470 -- # waitforlisten 344287 00:22:08.152 10:17:21 -- common/autotest_common.sh@819 -- # '[' -z 344287 ']' 00:22:08.152 10:17:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.152 10:17:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.152 10:17:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.152 10:17:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.152 10:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 [2024-04-24 10:17:21.173870] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:08.152 [2024-04-24 10:17:21.173910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.152 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.152 [2024-04-24 10:17:21.232344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.152 [2024-04-24 10:17:21.307674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:08.152 [2024-04-24 10:17:21.307787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.152 [2024-04-24 10:17:21.307796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.152 [2024-04-24 10:17:21.307802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
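The TLS target above is deliberately started with --wait-for-rpc: initialization is held so the ssl socket implementation can be configured first, which is what the sock_* RPCs below do before framework_start_init releases the app. The skeleton of that sequence (paths abbreviated):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  scripts/rpc.py sock_set_default_impl -i ssl                   # make ssl the default sock layer
  scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13  # pin the TLS version under test
  scripts/rpc.py framework_start_init                           # finish subsystem init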
00:22:08.153 [2024-04-24 10:17:21.307818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.719 10:17:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:08.719 10:17:21 -- common/autotest_common.sh@852 -- # return 0 00:22:08.719 10:17:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:08.719 10:17:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:08.719 10:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:08.977 10:17:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.977 10:17:22 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:22:08.977 10:17:22 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:08.977 true 00:22:08.977 10:17:22 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.977 10:17:22 -- target/tls.sh@82 -- # jq -r .tls_version 00:22:09.235 10:17:22 -- target/tls.sh@82 -- # version=0 00:22:09.235 10:17:22 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:22:09.235 10:17:22 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:09.235 10:17:22 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.235 10:17:22 -- target/tls.sh@90 -- # jq -r .tls_version 00:22:09.493 10:17:22 -- target/tls.sh@90 -- # version=13 00:22:09.493 10:17:22 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:22:09.493 10:17:22 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:09.751 10:17:22 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.751 10:17:22 -- target/tls.sh@98 -- # jq -r .tls_version 00:22:09.751 10:17:23 -- target/tls.sh@98 -- # version=7 00:22:09.751 10:17:23 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:22:09.751 10:17:23 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:09.751 10:17:23 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:10.009 10:17:23 -- target/tls.sh@105 -- # ktls=false 00:22:10.009 10:17:23 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:22:10.009 10:17:23 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:10.285 10:17:23 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:10.285 10:17:23 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:10.285 10:17:23 -- target/tls.sh@113 -- # ktls=true 00:22:10.285 10:17:23 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:22:10.285 10:17:23 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:10.551 10:17:23 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:10.551 10:17:23 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:22:10.551 10:17:23 -- target/tls.sh@121 -- # ktls=false 00:22:10.551 10:17:23 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:22:10.551 10:17:23 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:22:10.551 10:17:23 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff
00:22:10.551 10:17:23 -- target/tls.sh@49 -- # local key hash crc
00:22:10.551 10:17:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff
00:22:10.551 10:17:23 -- target/tls.sh@51 -- # hash=01
00:22:10.551 10:17:23 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff
00:22:10.551 10:17:23 -- target/tls.sh@52 -- # gzip -1 -c
00:22:10.551 10:17:23 -- target/tls.sh@52 -- # tail -c8
00:22:10.551 10:17:23 -- target/tls.sh@52 -- # head -c 4
00:22:10.551 10:17:23 -- target/tls.sh@52 -- # crc='p$H�'
00:22:10.809 10:17:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�'
00:22:10.809 10:17:23 -- target/tls.sh@54 -- # base64 /dev/fd/62
00:22:10.810 10:17:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:10.810 10:17:23 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:10.810 10:17:23 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100
00:22:10.810 10:17:23 -- target/tls.sh@49 -- # local key hash crc
00:22:10.810 10:17:23 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100
00:22:10.810 10:17:23 -- target/tls.sh@51 -- # hash=01
00:22:10.810 10:17:23 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100
00:22:10.810 10:17:23 -- target/tls.sh@52 -- # gzip -1 -c
00:22:10.810 10:17:23 -- target/tls.sh@52 -- # tail -c8
00:22:10.810 10:17:23 -- target/tls.sh@52 -- # head -c 4
00:22:10.810 10:17:23 -- target/tls.sh@52 -- # crc=$'_\006o\330'
00:22:10.810 10:17:23 -- target/tls.sh@54 -- # base64 /dev/fd/62
00:22:10.810 10:17:23 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330'
00:22:10.810 10:17:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:22:10.810 10:17:23 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:22:10.810 10:17:23 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:10.810 10:17:23 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:22:10.810 10:17:23 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:10.810 10:17:23 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:22:10.810 10:17:23 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:10.810 10:17:23 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
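[Annotation] For readers following the key derivation above: format_interchange_psk appends the CRC32 of the key bytes to the key and base64-encodes the pair. The shell trick is that a gzip -1 stream's last 8 bytes are the CRC32 (little-endian) followed by the input size, so 'gzip -1 -c | tail -c8 | head -c 4' extracts a portable CRC32. A sketch in Python that should reproduce the keys written to key1.txt and key2.txt (an assumption-free restatement of what the log shows, not SPDK source):

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: str = "01") -> str:
        # zlib.crc32 is the same CRC32 that gzip stores in its trailer.
        crc = zlib.crc32(key.encode()).to_bytes(4, "little")
        b64 = base64.b64encode(key.encode() + crc).decode()
        return f"NVMeTLSkey-1:{hash_id}:{b64}:"

    # Per the log:
    # format_interchange_psk("00112233445566778899aabbccddeeff")
    #   == "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

Both key files are then chmod'ed to 0600, which matters later in the permission-failure test.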
00:22:10.810 10:17:23 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:22:10.810 10:17:24 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:22:11.068 10:17:24 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:11.068 10:17:24 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:11.068 10:17:24 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:11.327 [2024-04-24 10:17:24.417047] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:11.327 10:17:24 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:11.327 10:17:24 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:11.585 [2024-04-24 10:17:24.753902] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:11.585 [2024-04-24 10:17:24.754080] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:11.585 10:17:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:11.842 malloc0
00:22:11.842 10:17:24 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:11.842 10:17:25 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
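[Annotation] The six RPCs above are the whole TLS target bring-up exercised by setup_nvmf_tgt: TCP transport, subsystem, a TLS-enabled listener (-k), a malloc backing bdev, its namespace, and a host entry bound to a PSK file. Condensed into a sketch (same commands and arguments as the log, wrapped in subprocess only for readability):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SUBNQN = "nqn.2016-06.io.spdk:cnode1"

    def rpc(*args: str) -> None:
        subprocess.check_call([RPC, *args])

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10")
    # -k marks the listener as TLS-enabled (experimental, per the notice above).
    rpc("nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", SUBNQN, "nqn.2016-06.io.spdk:host1",
        "--psk", "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt")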
00:22:12.100 10:17:25 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:12.100 EAL: No free 2048 kB hugepages reported on node 1
00:22:22.118 Initializing NVMe Controllers
00:22:22.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:22.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:22.118 Initialization complete. Launching workers.
00:22:22.118 ========================================================
00:22:22.118 Latency(us)
00:22:22.118 Device Information : IOPS MiB/s Average min max
00:22:22.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17098.76 66.79 3743.26 783.87 4725.93
00:22:22.118 ========================================================
00:22:22.118 Total : 17098.76 66.79 3743.26 783.87 4725.93
00:22:22.118
00:22:22.118 10:17:35 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:22.118 10:17:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:22.118 10:17:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:22.118 10:17:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:22.118 10:17:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt'
00:22:22.118 10:17:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:22.118 10:17:35 -- target/tls.sh@28 -- # bdevperf_pid=346808
00:22:22.118 10:17:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:22.118 10:17:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:22.118 10:17:35 -- target/tls.sh@31 -- # waitforlisten 346808 /var/tmp/bdevperf.sock
00:22:22.118 10:17:35 -- common/autotest_common.sh@819 -- # '[' -z 346808 ']'
00:22:22.118 10:17:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:22.118 10:17:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:22.118 10:17:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:22.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:17:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:22.118 10:17:35 -- common/autotest_common.sh@10 -- # set +x
00:22:22.118 [2024-04-24 10:17:35.394232] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:22.119 [2024-04-24 10:17:35.394280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346808 ]
00:22:22.377 EAL: No free 2048 kB hugepages reported on node 1
00:22:22.377 [2024-04-24 10:17:35.445278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:22.377 [2024-04-24 10:17:35.514444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:22.963 10:17:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:22.963 10:17:36 -- common/autotest_common.sh@852 -- # return 0
00:22:22.963 10:17:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:23.221 [2024-04-24 10:17:36.344090] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:23.221 TLSTESTn1
00:22:23.221 10:17:36 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:23.479 Running I/O for 10 seconds...
00:22:33.461
00:22:33.461 Latency(us)
00:22:33.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.461 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:33.461 Verification LBA range: start 0x0 length 0x2000
00:22:33.461 TLSTESTn1 : 10.01 4048.67 15.82 0.00 0.00 31587.87 3732.70 54024.46
00:22:33.461 ===================================================================================================================
00:22:33.461 Total : 4048.67 15.82 0.00 0.00 31587.87 3732.70 54024.46
00:22:33.461 0
00:22:33.461 10:17:46 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:33.461 10:17:46 -- target/tls.sh@45 -- # killprocess 346808
00:22:33.461 10:17:46 -- common/autotest_common.sh@926 -- # '[' -z 346808 ']'
00:22:33.461 10:17:46 -- common/autotest_common.sh@930 -- # kill -0 346808
00:22:33.461 10:17:46 -- common/autotest_common.sh@931 -- # uname
00:22:33.461 10:17:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:33.461 10:17:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 346808
00:22:33.461 10:17:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:22:33.461 10:17:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:22:33.461 10:17:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 346808' killing process with pid 346808 10:17:46 -- common/autotest_common.sh@945 -- # kill 346808 Received shutdown signal, test time was about 10.000000 seconds
00:22:33.461
00:22:33.461 Latency(us)
00:22:33.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.461 ===================================================================================================================
00:22:33.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:33.461 10:17:46 -- common/autotest_common.sh@950 -- # wait 346808
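[Annotation] The positive case is done: bdevperf attached a TLS controller with the matching key and ran 10 seconds of verify I/O. Everything from here to the long-key section is deliberate failure testing: each run_bdevperf below is wrapped in autotest_common.sh's NOT, which succeeds only when the wrapped command fails (hence the 'return 1', 'es=1', and '(( !es == 0 ))' bookkeeping that follows each case, and the expected JSON-RPC error responses). A rough Python analogue of NOT, assuming only the inverted-exit-status semantics the log itself demonstrates:

    import subprocess

    def NOT(*cmd: str) -> bool:
        # Succeeds (True) iff the wrapped command fails.
        return subprocess.run(cmd).returncode != 0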
00:22:33.727 10:17:46 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:22:33.727 10:17:46 -- common/autotest_common.sh@640 -- # local es=0
00:22:33.727 10:17:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:22:33.727 10:17:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf
00:22:33.727 10:17:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:33.727 10:17:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf
00:22:33.727 10:17:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:33.727 10:17:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:22:33.727 10:17:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:33.727 10:17:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:33.727 10:17:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:33.727 10:17:46 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt'
00:22:33.727 10:17:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:33.727 10:17:46 -- target/tls.sh@28 -- # bdevperf_pid=348750
00:22:33.727 10:17:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:33.727 10:17:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:33.727 10:17:46 -- target/tls.sh@31 -- # waitforlisten 348750 /var/tmp/bdevperf.sock
00:22:33.727 10:17:46 -- common/autotest_common.sh@819 -- # '[' -z 348750 ']'
00:22:33.727 10:17:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:33.727 10:17:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:33.727 10:17:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:33.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:17:46 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:33.727 10:17:46 -- common/autotest_common.sh@10 -- # set +x
00:22:33.727 [2024-04-24 10:17:46.873183] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:33.727 [2024-04-24 10:17:46.873234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348750 ]
00:22:33.727 EAL: No free 2048 kB hugepages reported on node 1
00:22:33.727 [2024-04-24 10:17:46.923301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:33.727 [2024-04-24 10:17:46.987679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:34.661 10:17:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:34.661 10:17:47 -- common/autotest_common.sh@852 -- # return 0
00:22:34.661 10:17:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:22:34.661 [2024-04-24 10:17:47.822509] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:34.661 [2024-04-24 10:17:47.831412] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:22:34.661 [2024-04-24 10:17:47.831817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf360c0 (107): Transport endpoint is not connected
00:22:34.661 [2024-04-24 10:17:47.832810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf360c0 (9): Bad file descriptor
00:22:34.661 [2024-04-24 10:17:47.833811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:34.661 [2024-04-24 10:17:47.833821] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:22:34.661 [2024-04-24 10:17:47.833829] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:34.661 request:
00:22:34.661 {
00:22:34.661 "name": "TLSTEST",
00:22:34.661 "trtype": "tcp",
00:22:34.661 "traddr": "10.0.0.2",
00:22:34.661 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:34.661 "adrfam": "ipv4",
00:22:34.661 "trsvcid": "4420",
00:22:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:34.661 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt",
00:22:34.661 "method": "bdev_nvme_attach_controller",
00:22:34.661 "req_id": 1
00:22:34.661 }
00:22:34.661 Got JSON-RPC error response
00:22:34.661 response:
00:22:34.661 {
00:22:34.661 "code": -32602,
00:22:34.661 "message": "Invalid parameters"
00:22:34.661 }
00:22:34.661 10:17:47 -- target/tls.sh@36 -- # killprocess 348750
00:22:34.661 10:17:47 -- common/autotest_common.sh@926 -- # '[' -z 348750 ']'
00:22:34.661 10:17:47 -- common/autotest_common.sh@930 -- # kill -0 348750
00:22:34.661 10:17:47 -- common/autotest_common.sh@931 -- # uname
00:22:34.661 10:17:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:34.661 10:17:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 348750
00:22:34.661 10:17:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:22:34.661 10:17:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:22:34.661 10:17:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 348750' killing process with pid 348750 10:17:47 -- common/autotest_common.sh@945 -- # kill 348750 Received shutdown signal, test time was about 7.768586 seconds
00:22:34.661
00:22:34.661 Latency(us)
00:22:34.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:34.661 ===================================================================================================================
00:22:34.661 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:34.661 10:17:47 -- common/autotest_common.sh@950 -- # wait 348750
00:22:34.919 10:17:48 -- target/tls.sh@37 -- # return 1
00:22:34.919 10:17:48 -- common/autotest_common.sh@643 -- # es=1
00:22:34.919 10:17:48 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:22:34.919 10:17:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:22:34.919 10:17:48 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:22:34.919 10:17:48 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:34.919 10:17:48 -- common/autotest_common.sh@640 -- # local es=0
00:22:34.919 10:17:48 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:34.919 10:17:48 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf
00:22:34.919 10:17:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:34.919 10:17:48 -- common/autotest_common.sh@632 -- # type -t run_bdevperf
00:22:34.919 10:17:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:34.919 10:17:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:34.919 10:17:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:34.919 10:17:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:34.919 10:17:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:22:34.919 10:17:48 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt'
00:22:34.919 10:17:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:34.919 10:17:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:34.919 10:17:48 -- target/tls.sh@28 -- # bdevperf_pid=348981
00:22:34.919 10:17:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:34.919 10:17:48 -- target/tls.sh@31 -- # waitforlisten 348981 /var/tmp/bdevperf.sock
00:22:34.919 10:17:48 -- common/autotest_common.sh@819 -- # '[' -z 348981 ']'
00:22:34.919 10:17:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:34.919 10:17:48 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:34.919 10:17:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:34.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:17:48 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:34.919 10:17:48 -- common/autotest_common.sh@10 -- # set +x
00:22:34.919 [2024-04-24 10:17:48.122780] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:34.919 [2024-04-24 10:17:48.122828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348981 ]
00:22:34.919 EAL: No free 2048 kB hugepages reported on node 1
00:22:34.919 [2024-04-24 10:17:48.172790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:35.177 [2024-04-24 10:17:48.243832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:35.743 10:17:48 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:35.743 10:17:48 -- common/autotest_common.sh@852 -- # return 0
00:22:35.743 10:17:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:36.001 [2024-04-24 10:17:49.085376] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:36.001 [2024-04-24 10:17:49.094536] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:22:36.001 [2024-04-24 10:17:49.094560] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:22:36.001 [2024-04-24 10:17:49.094585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:22:36.001 [2024-04-24 10:17:49.095684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e20c0 (107): Transport endpoint is not connected
00:22:36.001 [2024-04-24 10:17:49.096678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e20c0 (9): Bad file descriptor
00:22:36.001 [2024-04-24 10:17:49.097680] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:36.001 [2024-04-24 10:17:49.097690] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:22:36.001 [2024-04-24 10:17:49.097699] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:36.001 request:
00:22:36.001 {
00:22:36.001 "name": "TLSTEST",
00:22:36.001 "trtype": "tcp",
00:22:36.001 "traddr": "10.0.0.2",
00:22:36.001 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:22:36.001 "adrfam": "ipv4",
00:22:36.001 "trsvcid": "4420",
00:22:36.001 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:36.001 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt",
00:22:36.001 "method": "bdev_nvme_attach_controller",
00:22:36.001 "req_id": 1
00:22:36.001 }
00:22:36.001 Got JSON-RPC error response
00:22:36.001 response:
00:22:36.001 {
00:22:36.001 "code": -32602,
00:22:36.001 "message": "Invalid parameters"
00:22:36.001 }
00:22:36.001 10:17:49 -- target/tls.sh@36 -- # killprocess 348981
00:22:36.001 10:17:49 -- common/autotest_common.sh@926 -- # '[' -z 348981 ']'
00:22:36.001 10:17:49 -- common/autotest_common.sh@930 -- # kill -0 348981
00:22:36.001 10:17:49 -- common/autotest_common.sh@931 -- # uname
00:22:36.001 10:17:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:36.001 10:17:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 348981
00:22:36.001 10:17:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:22:36.001 10:17:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:22:36.001 10:17:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 348981' killing process with pid 348981 10:17:49 -- common/autotest_common.sh@945 -- # kill 348981 Received shutdown signal, test time was about 9.029981 seconds
00:22:36.001
00:22:36.001 Latency(us)
00:22:36.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.001 ===================================================================================================================
00:22:36.001 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:36.001 10:17:49 -- common/autotest_common.sh@950 -- # wait 348981
00:22:36.260 10:17:49 -- target/tls.sh@37 -- # return 1
00:22:36.260 10:17:49 -- common/autotest_common.sh@643 -- # es=1
00:22:36.260 10:17:49 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:22:36.260 10:17:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:22:36.260 10:17:49 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
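[Annotation] The two failure modes so far differ in a useful way. With the wrong key (key2), the TLS handshake itself breaks and the host only sees errno 107 on the socket. With an unknown host NQN, the tcp.c/posix.c errors expose the server-side lookup key: the PSK is selected by a TLS PSK identity that binds the (hostnqn, subnqn) pair, so a host with no nvmf_subsystem_add_host entry fails before the NVMe layer is reached. Reproducing only the identity shape that appears verbatim in the errors above (the "NVMe0R01" prefix is copied as observed, not derived from a spec here):

    def psk_identity(hostnqn: str, subnqn: str) -> str:
        # e.g. "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
        return f"NVMe0R01 {hostnqn} {subnqn}"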
00:22:36.260 10:17:49 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:36.260 10:17:49 -- common/autotest_common.sh@640 -- # local es=0
00:22:36.260 10:17:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:36.260 10:17:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf
00:22:36.260 10:17:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:36.260 10:17:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf
00:22:36.260 10:17:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:36.260 10:17:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:36.260 10:17:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:36.260 10:17:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:22:36.260 10:17:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:36.260 10:17:49 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt'
00:22:36.260 10:17:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:36.260 10:17:49 -- target/tls.sh@28 -- # bdevperf_pid=349139
00:22:36.260 10:17:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:36.260 10:17:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:36.260 10:17:49 -- target/tls.sh@31 -- # waitforlisten 349139 /var/tmp/bdevperf.sock
00:22:36.261 10:17:49 -- common/autotest_common.sh@819 -- # '[' -z 349139 ']'
00:22:36.261 10:17:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:36.261 10:17:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:36.261 10:17:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:17:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:36.261 10:17:49 -- common/autotest_common.sh@10 -- # set +x
00:22:36.261 [2024-04-24 10:17:49.406178] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:36.261 [2024-04-24 10:17:49.406226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349139 ]
00:22:36.261 EAL: No free 2048 kB hugepages reported on node 1
00:22:36.261 [2024-04-24 10:17:49.457143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:36.261 [2024-04-24 10:17:49.525821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:37.194 10:17:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:37.194 10:17:50 -- common/autotest_common.sh@852 -- # return 0
00:22:37.194 10:17:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:22:37.194 [2024-04-24 10:17:50.360559] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:37.194 [2024-04-24 10:17:50.367792] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:22:37.194 [2024-04-24 10:17:50.367816] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:22:37.194 [2024-04-24 10:17:50.367841] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:22:37.194 [2024-04-24 10:17:50.367989] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21770c0 (107): Transport endpoint is not connected
00:22:37.194 [2024-04-24 10:17:50.368982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21770c0 (9): Bad file descriptor
00:22:37.194 [2024-04-24 10:17:50.369983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:37.194 [2024-04-24 10:17:50.369994] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:22:37.194 [2024-04-24 10:17:50.370003] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:37.194 request:
00:22:37.194 {
00:22:37.194 "name": "TLSTEST",
00:22:37.194 "trtype": "tcp",
00:22:37.194 "traddr": "10.0.0.2",
00:22:37.194 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:37.194 "adrfam": "ipv4",
00:22:37.194 "trsvcid": "4420",
00:22:37.194 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:37.194 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt",
00:22:37.194 "method": "bdev_nvme_attach_controller",
00:22:37.194 "req_id": 1
00:22:37.194 }
00:22:37.194 Got JSON-RPC error response
00:22:37.194 response:
00:22:37.194 {
00:22:37.194 "code": -32602,
00:22:37.194 "message": "Invalid parameters"
00:22:37.194 }
00:22:37.194 10:17:50 -- target/tls.sh@36 -- # killprocess 349139
00:22:37.194 10:17:50 -- common/autotest_common.sh@926 -- # '[' -z 349139 ']'
00:22:37.194 10:17:50 -- common/autotest_common.sh@930 -- # kill -0 349139
00:22:37.194 10:17:50 -- common/autotest_common.sh@931 -- # uname
00:22:37.194 10:17:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:37.194 10:17:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 349139
00:22:37.194 10:17:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:22:37.194 10:17:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:22:37.194 10:17:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 349139' killing process with pid 349139 10:17:50 -- common/autotest_common.sh@945 -- # kill 349139 Received shutdown signal, test time was about 10.000000 seconds
00:22:37.195
00:22:37.195 Latency(us)
00:22:37.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.195 ===================================================================================================================
00:22:37.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:37.195 10:17:50 -- common/autotest_common.sh@950 -- # wait 349139
00:22:37.454 10:17:50 -- target/tls.sh@37 -- # return 1
00:22:37.454 10:17:50 -- common/autotest_common.sh@643 -- # es=1
00:22:37.454 10:17:50 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:22:37.454 10:17:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:22:37.454 10:17:50 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:22:37.454 10:17:50 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:22:37.454 10:17:50 -- common/autotest_common.sh@640 -- # local es=0
00:22:37.454 10:17:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:22:37.454 10:17:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf
00:22:37.454 10:17:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:37.454 10:17:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf
00:22:37.454 10:17:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:22:37.454 10:17:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:22:37.454 10:17:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:37.454 10:17:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:37.454 10:17:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:37.454 10:17:50 -- target/tls.sh@23 -- # psk=
00:22:37.454 10:17:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:37.454 10:17:50 -- target/tls.sh@28 -- # bdevperf_pid=349306
00:22:37.454 10:17:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:37.454 10:17:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:37.454 10:17:50 -- target/tls.sh@31 -- # waitforlisten 349306 /var/tmp/bdevperf.sock
00:22:37.454 10:17:50 -- common/autotest_common.sh@819 -- # '[' -z 349306 ']'
00:22:37.454 10:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:37.454 10:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:37.454 10:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:37.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:37.454 10:17:50 -- common/autotest_common.sh@10 -- # set +x
00:22:37.454 [2024-04-24 10:17:50.681689] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:37.454 [2024-04-24 10:17:50.681738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349306 ]
00:22:37.454 EAL: No free 2048 kB hugepages reported on node 1
00:22:37.454 [2024-04-24 10:17:50.733341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:37.712 [2024-04-24 10:17:50.801320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:38.278 10:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:38.278 10:17:51 -- common/autotest_common.sh@852 -- # return 0
00:22:38.278 10:17:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:38.536 [2024-04-24 10:17:51.641033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:22:38.536 [2024-04-24 10:17:51.643214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1439740 (9): Bad file descriptor
00:22:38.536 [2024-04-24 10:17:51.644211] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:38.536 [2024-04-24 10:17:51.644223] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:22:38.536 [2024-04-24 10:17:51.644231] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:38.536 request:
00:22:38.536 {
00:22:38.536 "name": "TLSTEST",
00:22:38.536 "trtype": "tcp",
00:22:38.536 "traddr": "10.0.0.2",
00:22:38.536 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:38.536 "adrfam": "ipv4",
00:22:38.536 "trsvcid": "4420",
00:22:38.536 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:38.536 "method": "bdev_nvme_attach_controller",
00:22:38.536 "req_id": 1
00:22:38.536 }
00:22:38.536 Got JSON-RPC error response
00:22:38.536 response:
00:22:38.536 {
00:22:38.536 "code": -32602,
00:22:38.536 "message": "Invalid parameters"
00:22:38.536 }
00:22:38.536 10:17:51 -- target/tls.sh@36 -- # killprocess 349306
00:22:38.536 10:17:51 -- common/autotest_common.sh@926 -- # '[' -z 349306 ']'
00:22:38.536 10:17:51 -- common/autotest_common.sh@930 -- # kill -0 349306
00:22:38.536 10:17:51 -- common/autotest_common.sh@931 -- # uname
00:22:38.536 10:17:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:38.536 10:17:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 349306
00:22:38.536 10:17:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:22:38.536 10:17:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:22:38.536 10:17:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 349306' killing process with pid 349306 10:17:51 -- common/autotest_common.sh@945 -- # kill 349306 Received shutdown signal, test time was about 10.000000 seconds
00:22:38.536
00:22:38.536 Latency(us)
00:22:38.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.536 ===================================================================================================================
00:22:38.536 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:22:38.537 10:17:51 -- common/autotest_common.sh@950 -- # wait 349306
00:22:38.795 10:17:51 -- target/tls.sh@37 -- # return 1
00:22:38.795 10:17:51 -- common/autotest_common.sh@643 -- # es=1
00:22:38.795 10:17:51 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:22:38.795 10:17:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:22:38.795 10:17:51 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:22:38.795 10:17:51 -- target/tls.sh@167 -- # killprocess 344287
00:22:38.795 10:17:51 -- common/autotest_common.sh@926 -- # '[' -z 344287 ']'
00:22:38.795 10:17:51 -- common/autotest_common.sh@930 -- # kill -0 344287
00:22:38.795 10:17:51 -- common/autotest_common.sh@931 -- # uname
00:22:38.795 10:17:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:38.795 10:17:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 344287
00:22:38.795 10:17:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:38.795 10:17:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:38.795 10:17:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 344287' killing process with pid 344287 10:17:51 -- common/autotest_common.sh@945 -- # kill 344287
00:22:38.795 10:17:51 -- common/autotest_common.sh@950 -- # wait 344287
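[Annotation] That completes the first half of the suite and tears down the nvmf target (pid 344287). The four NOT cases all targeted the same TLS listener and differed in exactly one parameter each; as a summary, with values copied from the log (paths shortened to basenames of the key files under spdk/test/nvmf/target/):

    # (subnqn, hostnqn, psk file) -> why bdev_nvme_attach_controller must fail
    NEGATIVE_CASES = [
        ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "key2.txt"),  # wrong key
        ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host2", "key1.txt"),  # unknown host NQN
        ("nqn.2016-06.io.spdk:cnode2", "nqn.2016-06.io.spdk:host1", "key1.txt"),  # unknown subsystem NQN
        ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", None),        # TLS listener, no PSK
    ]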
00:22:39.053 10:17:52 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02
00:22:39.053 10:17:52 -- target/tls.sh@49 -- # local key hash crc
00:22:39.053 10:17:52 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:22:39.053 10:17:52 -- target/tls.sh@51 -- # hash=02
00:22:39.053 10:17:52 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677
00:22:39.053 10:17:52 -- target/tls.sh@52 -- # gzip -1 -c
00:22:39.053 10:17:52 -- target/tls.sh@52 -- # head -c 4
00:22:39.053 10:17:52 -- target/tls.sh@52 -- # tail -c8
00:22:39.053 10:17:52 -- target/tls.sh@52 -- # crc='�e�'\'''
00:22:39.053 10:17:52 -- target/tls.sh@54 -- # base64 /dev/fd/62
00:22:39.053 10:17:52 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\'''
00:22:39.053 10:17:52 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:22:39.053 10:17:52 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:22:39.053 10:17:52 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:22:39.053 10:17:52 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:22:39.053 10:17:52 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
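[Annotation] Same derivation as before, only with a 48-character key and hash id 02, written to key_long.txt with 0600 permissions. The format_interchange_psk sketch given earlier should reproduce this value as well:

    assert format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", "02"
    ) == "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:"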
00:22:39.053 10:17:52 -- target/tls.sh@172 -- # nvmfappstart -m 0x2
00:22:39.053 10:17:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:39.053 10:17:52 -- common/autotest_common.sh@712 -- # xtrace_disable
00:22:39.053 10:17:52 -- common/autotest_common.sh@10 -- # set +x
00:22:39.053 10:17:52 -- nvmf/common.sh@469 -- # nvmfpid=349678
00:22:39.053 10:17:52 -- nvmf/common.sh@470 -- # waitforlisten 349678
00:22:39.053 10:17:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:39.053 10:17:52 -- common/autotest_common.sh@819 -- # '[' -z 349678 ']'
00:22:39.053 10:17:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:39.053 10:17:52 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:39.053 10:17:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:17:52 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:39.053 10:17:52 -- common/autotest_common.sh@10 -- # set +x
00:22:39.054 [2024-04-24 10:17:52.241903] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:39.054 [2024-04-24 10:17:52.241949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:39.054 EAL: No free 2048 kB hugepages reported on node 1
00:22:39.311 [2024-04-24 10:17:52.298586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:39.311 [2024-04-24 10:17:52.368777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:39.311 [2024-04-24 10:17:52.368889] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:39.311 [2024-04-24 10:17:52.368896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:39.311 [2024-04-24 10:17:52.368903] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:39.311 [2024-04-24 10:17:52.368922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:39.885 10:17:53 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:39.885 10:17:53 -- common/autotest_common.sh@852 -- # return 0
00:22:39.885 10:17:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:39.885 10:17:53 -- common/autotest_common.sh@718 -- # xtrace_disable
00:22:39.885 10:17:53 -- common/autotest_common.sh@10 -- # set +x
00:22:39.885 10:17:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:39.885 10:17:53 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:22:39.885 10:17:53 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:22:39.885 10:17:53 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:40.142 [2024-04-24 10:17:53.219928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:40.142 10:17:53 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:40.142 10:17:53 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:40.400 [2024-04-24 10:17:53.568827] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:40.400 [2024-04-24 10:17:53.569002] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:40.400 10:17:53 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:40.659 malloc0
00:22:40.659 10:17:53 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:40.659 10:17:53 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:22:40.918 10:17:54 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:22:40.918 10:17:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:40.918 10:17:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:40.918 10:17:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:40.918 10:17:54 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt'
00:22:40.918 10:17:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:40.918 10:17:54 -- target/tls.sh@28 -- # bdevperf_pid=350003
00:22:40.918 10:17:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:40.918 10:17:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:40.918 10:17:54 -- target/tls.sh@31 -- # waitforlisten 350003 /var/tmp/bdevperf.sock
00:22:40.918 10:17:54 -- common/autotest_common.sh@819 -- # '[' -z 350003 ']'
00:22:40.918 10:17:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:40.918 10:17:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:40.918 10:17:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:40.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:17:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:40.918 10:17:54 -- common/autotest_common.sh@10 -- # set +x
00:22:40.918 [2024-04-24 10:17:54.126955] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:22:40.918 [2024-04-24 10:17:54.127001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350003 ]
00:22:40.918 EAL: No free 2048 kB hugepages reported on node 1
00:22:40.918 [2024-04-24 10:17:54.176461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:41.176 [2024-04-24 10:17:54.250786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:41.769 10:17:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:41.769 10:17:54 -- common/autotest_common.sh@852 -- # return 0
00:22:41.769 10:17:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:22:42.027 [2024-04-24 10:17:55.085531] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:42.027 TLSTESTn1
00:22:42.027 10:17:55 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:22:42.027 Running I/O for 10 seconds...
00:22:54.215
00:22:54.215 Latency(us)
00:22:54.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.215 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:54.215 Verification LBA range: start 0x0 length 0x2000
00:22:54.215 TLSTESTn1 : 10.01 4128.66 16.13 0.00 0.00 30973.72 3604.48 59267.34
00:22:54.215 ===================================================================================================================
00:22:54.215 Total : 4128.66 16.13 0.00 0.00 30973.72 3604.48 59267.34
00:22:54.215 0
00:22:54.215 10:18:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:54.215 10:18:05 -- target/tls.sh@45 -- # killprocess 350003
00:22:54.215 10:18:05 -- common/autotest_common.sh@926 -- # '[' -z 350003 ']'
00:22:54.215 10:18:05 -- common/autotest_common.sh@930 -- # kill -0 350003
00:22:54.215 10:18:05 -- common/autotest_common.sh@931 -- # uname
00:22:54.215 10:18:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:54.215 10:18:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 350003
00:22:54.215 10:18:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:22:54.215 10:18:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:22:54.215 10:18:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 350003' killing process with pid 350003 10:18:05 -- common/autotest_common.sh@945 -- # kill 350003 Received shutdown signal, test time was about 10.000000 seconds
00:22:54.215
00:22:54.215 Latency(us)
00:22:54.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.215 ===================================================================================================================
00:22:54.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:54.215 10:18:05 -- common/autotest_common.sh@950 -- # wait 350003
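[Annotation] The long-key positive run mirrors the earlier one: the verify workload is driven over bdevperf's own RPC socket rather than the target's. bdevperf starts with -z (wait for RPC), the TLS controller is attached with bdev_nvme_attach_controller --psk, and bdevperf.py perform_tests kicks off the run. A condensed sketch of that sequence, with paths and flags exactly as they appear in this log:

    import subprocess

    SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
    SOCK = "/var/tmp/bdevperf.sock"

    def attach_tls_controller(psk_path: str) -> None:
        subprocess.check_call([
            f"{SPDK}/scripts/rpc.py", "-s", SOCK, "bdev_nvme_attach_controller",
            "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
            "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
            "-q", "nqn.2016-06.io.spdk:host1", "--psk", psk_path,
        ])

    def run_io() -> None:
        # -t 20 is the harness timeout; the 10 s workload length comes from
        # bdevperf's own '-w verify -t 10' at startup.
        subprocess.check_call([
            f"{SPDK}/examples/bdev/bdevperf/bdevperf.py", "-t", "20", "-s", SOCK,
            "perform_tests",
        ])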
# bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.216 10:18:05 -- target/tls.sh@28 -- # bdevperf_pid=352001 00:22:54.216 10:18:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.216 10:18:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.216 10:18:05 -- target/tls.sh@31 -- # waitforlisten 352001 /var/tmp/bdevperf.sock 00:22:54.216 10:18:05 -- common/autotest_common.sh@819 -- # '[' -z 352001 ']' 00:22:54.216 10:18:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.216 10:18:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:54.216 10:18:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.216 10:18:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:54.216 10:18:05 -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 [2024-04-24 10:18:05.621927] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:54.216 [2024-04-24 10:18:05.621974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352001 ] 00:22:54.216 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.216 [2024-04-24 10:18:05.672083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.216 [2024-04-24 10:18:05.737729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.216 10:18:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:54.216 10:18:06 -- common/autotest_common.sh@852 -- # return 0 00:22:54.216 10:18:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:54.216 [2024-04-24 10:18:06.576207] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.216 [2024-04-24 10:18:06.576251] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:54.216 request: 00:22:54.216 { 00:22:54.216 "name": "TLSTEST", 00:22:54.216 "trtype": "tcp", 00:22:54.216 "traddr": "10.0.0.2", 00:22:54.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.216 "adrfam": "ipv4", 00:22:54.216 "trsvcid": "4420", 00:22:54.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.216 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:54.216 "method": "bdev_nvme_attach_controller", 00:22:54.216 "req_id": 1 00:22:54.216 } 00:22:54.216 Got JSON-RPC error response 00:22:54.216 response: 00:22:54.216 { 00:22:54.216 "code": -22, 00:22:54.216 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:54.216 } 00:22:54.216 10:18:06 -- target/tls.sh@36 -- # killprocess 352001 00:22:54.216 10:18:06 -- common/autotest_common.sh@926 -- # '[' -z 352001 ']' 00:22:54.216 10:18:06 -- common/autotest_common.sh@930 -- # kill -0 352001 
00:22:54.216 10:18:06 -- common/autotest_common.sh@931 -- # uname 00:22:54.216 10:18:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:54.216 10:18:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 352001 00:22:54.216 10:18:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:54.216 10:18:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:54.216 10:18:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 352001' 00:22:54.216 killing process with pid 352001 00:22:54.216 10:18:06 -- common/autotest_common.sh@945 -- # kill 352001 00:22:54.216 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.216 00:22:54.216 Latency(us) 00:22:54.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.216 =================================================================================================================== 00:22:54.216 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.216 10:18:06 -- common/autotest_common.sh@950 -- # wait 352001 00:22:54.216 10:18:06 -- target/tls.sh@37 -- # return 1 00:22:54.216 10:18:06 -- common/autotest_common.sh@643 -- # es=1 00:22:54.216 10:18:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:54.216 10:18:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:54.216 10:18:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:54.216 10:18:06 -- target/tls.sh@183 -- # killprocess 349678 00:22:54.216 10:18:06 -- common/autotest_common.sh@926 -- # '[' -z 349678 ']' 00:22:54.216 10:18:06 -- common/autotest_common.sh@930 -- # kill -0 349678 00:22:54.216 10:18:06 -- common/autotest_common.sh@931 -- # uname 00:22:54.216 10:18:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:54.216 10:18:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 349678 00:22:54.216 10:18:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:54.216 10:18:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:54.216 10:18:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 349678' 00:22:54.216 killing process with pid 349678 00:22:54.216 10:18:06 -- common/autotest_common.sh@945 -- # kill 349678 00:22:54.216 10:18:06 -- common/autotest_common.sh@950 -- # wait 349678 00:22:54.216 10:18:07 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:54.216 10:18:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:54.216 10:18:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:54.216 10:18:07 -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 10:18:07 -- nvmf/common.sh@469 -- # nvmfpid=352250 00:22:54.216 10:18:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.216 10:18:07 -- nvmf/common.sh@470 -- # waitforlisten 352250 00:22:54.216 10:18:07 -- common/autotest_common.sh@819 -- # '[' -z 352250 ']' 00:22:54.216 10:18:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.216 10:18:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:54.216 10:18:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
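The es=1 bookkeeping above, and the "NOT setup_nvmf_tgt" call at target/tls.sh@186 just below, come from autotest_common.sh's exit-status inversion helper: for these negative tests the harness counts a clean failure of the wrapped command as success. A rough sketch of its shape, reconstructed from the xtrace and hedged accordingly (the real helper handles more cases):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return 1    # killed by a signal: a real failure, not the expected one
      (( !es == 0 ))                # exits 0 only when the wrapped command returned nonzero
  }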
00:22:54.216 10:18:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:54.216 10:18:07 -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 [2024-04-24 10:18:07.152175] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:54.216 [2024-04-24 10:18:07.152223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.216 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.216 [2024-04-24 10:18:07.209207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.217 [2024-04-24 10:18:07.273002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.217 [2024-04-24 10:18:07.273123] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.217 [2024-04-24 10:18:07.273131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.217 [2024-04-24 10:18:07.273138] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.217 [2024-04-24 10:18:07.273153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.782 10:18:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:54.782 10:18:07 -- common/autotest_common.sh@852 -- # return 0 00:22:54.782 10:18:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:54.782 10:18:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:54.782 10:18:07 -- common/autotest_common.sh@10 -- # set +x 00:22:54.782 10:18:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.782 10:18:07 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:54.782 10:18:07 -- common/autotest_common.sh@640 -- # local es=0 00:22:54.782 10:18:07 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:54.782 10:18:07 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:54.782 10:18:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:54.783 10:18:07 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:54.783 10:18:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:54.783 10:18:07 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:54.783 10:18:07 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:54.783 10:18:07 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.040 [2024-04-24 10:18:08.136230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.041 10:18:08 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.041 10:18:08 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.298 [2024-04-24 10:18:08.473131] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.298 [2024-04-24 10:18:08.473315] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.298 10:18:08 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.555 malloc0 00:22:55.555 10:18:08 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.556 10:18:08 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:55.813 [2024-04-24 10:18:08.974611] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:55.813 [2024-04-24 10:18:08.974643] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:55.813 [2024-04-24 10:18:08.974658] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:55.813 request: 00:22:55.813 { 00:22:55.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.813 "host": "nqn.2016-06.io.spdk:host1", 00:22:55.813 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:55.813 "method": "nvmf_subsystem_add_host", 00:22:55.813 "req_id": 1 00:22:55.813 } 00:22:55.813 Got JSON-RPC error response 00:22:55.813 response: 00:22:55.813 { 00:22:55.813 "code": -32603, 00:22:55.813 "message": "Internal error" 00:22:55.813 } 00:22:55.813 10:18:08 -- common/autotest_common.sh@643 -- # es=1 00:22:55.813 10:18:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:55.813 10:18:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:55.813 10:18:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:55.813 10:18:08 -- target/tls.sh@189 -- # killprocess 352250 00:22:55.814 10:18:08 -- common/autotest_common.sh@926 -- # '[' -z 352250 ']' 00:22:55.814 10:18:08 -- common/autotest_common.sh@930 -- # kill -0 352250 00:22:55.814 10:18:08 -- common/autotest_common.sh@931 -- # uname 00:22:55.814 10:18:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:55.814 10:18:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 352250 00:22:55.814 10:18:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:55.814 10:18:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:55.814 10:18:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 352250' 00:22:55.814 killing process with pid 352250 00:22:55.814 10:18:09 -- common/autotest_common.sh@945 -- # kill 352250 00:22:55.814 10:18:09 -- common/autotest_common.sh@950 -- # wait 352250 00:22:56.072 10:18:09 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:56.072 10:18:09 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:56.072 10:18:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:56.072 10:18:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:56.072 10:18:09 -- common/autotest_common.sh@10 -- # set +x 00:22:56.072 10:18:09 -- nvmf/common.sh@469 -- # nvmfpid=353016 00:22:56.072 10:18:09 -- nvmf/common.sh@470 -- # waitforlisten 353016 00:22:56.072 10:18:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:56.072 10:18:09 -- common/autotest_common.sh@819 -- # '[' -z 353016 ']' 00:22:56.072 10:18:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.072 10:18:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:56.072 10:18:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.072 10:18:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:56.072 10:18:09 -- common/autotest_common.sh@10 -- # set +x 00:22:56.072 [2024-04-24 10:18:09.300401] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:56.072 [2024-04-24 10:18:09.300447] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.329 [2024-04-24 10:18:09.357173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.329 [2024-04-24 10:18:09.434307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:56.329 [2024-04-24 10:18:09.434413] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.329 [2024-04-24 10:18:09.434421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.329 [2024-04-24 10:18:09.434427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
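Note the launch pattern used for every nvmf_tgt in this run: the target runs under "ip netns exec cvl_0_0_ns_spdk", so the 10.0.0.x test addresses live in a private network namespace on the same host as the initiator. The JSON-RPC endpoint is a Unix-domain socket, which is filesystem-scoped rather than netns-scoped, so rpc.py keeps working from the default namespace. A sketch of that split (namespace name and flags taken from this run; the readiness poll is an assumption, approximating what the harness's waitforlisten does):

  sudo ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # from the default namespace: /var/tmp/spdk.sock is just a path in the shared filesystem
  scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods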
00:22:56.329 [2024-04-24 10:18:09.434443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.894 10:18:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:56.894 10:18:10 -- common/autotest_common.sh@852 -- # return 0 00:22:56.894 10:18:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:56.894 10:18:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:56.894 10:18:10 -- common/autotest_common.sh@10 -- # set +x 00:22:56.894 10:18:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.894 10:18:10 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:56.894 10:18:10 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:56.894 10:18:10 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:57.151 [2024-04-24 10:18:10.309965] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.151 10:18:10 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:57.409 10:18:10 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:57.409 [2024-04-24 10:18:10.646827] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.409 [2024-04-24 10:18:10.646998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.409 10:18:10 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:57.666 malloc0 00:22:57.666 10:18:10 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:57.924 10:18:10 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:57.924 10:18:11 -- target/tls.sh@197 -- # bdevperf_pid=353398 00:22:57.924 10:18:11 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.924 10:18:11 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.924 10:18:11 -- target/tls.sh@200 -- # waitforlisten 353398 /var/tmp/bdevperf.sock 00:22:57.924 10:18:11 -- common/autotest_common.sh@819 -- # '[' -z 353398 ']' 00:22:57.924 10:18:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.924 10:18:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.924 10:18:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
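With the key tightened to 0600 at target/tls.sh@190, the same setup_nvmf_tgt that was required to fail at @186 succeeds at @194. Condensed, the target-side TLS bring-up replayed above is six RPCs (rpc.py defaults to /var/tmp/spdk.sock; the key path is abbreviated):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt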
00:22:57.924 10:18:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.924 10:18:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.924 [2024-04-24 10:18:11.199989] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:57.924 [2024-04-24 10:18:11.200032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353398 ] 00:22:58.181 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.181 [2024-04-24 10:18:11.250284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.181 [2024-04-24 10:18:11.323928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.752 10:18:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.752 10:18:11 -- common/autotest_common.sh@852 -- # return 0 00:22:58.752 10:18:11 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:59.011 [2024-04-24 10:18:12.134806] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.011 TLSTESTn1 00:22:59.011 10:18:12 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:59.282 10:18:12 -- target/tls.sh@205 -- # tgtconf='{ 00:22:59.282 "subsystems": [ 00:22:59.282 { 00:22:59.282 "subsystem": "iobuf", 00:22:59.282 "config": [ 00:22:59.282 { 00:22:59.282 "method": "iobuf_set_options", 00:22:59.282 "params": { 00:22:59.282 "small_pool_count": 8192, 00:22:59.282 "large_pool_count": 1024, 00:22:59.282 "small_bufsize": 8192, 00:22:59.282 "large_bufsize": 135168 00:22:59.282 } 00:22:59.282 } 00:22:59.282 ] 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "subsystem": "sock", 00:22:59.282 "config": [ 00:22:59.282 { 00:22:59.282 "method": "sock_impl_set_options", 00:22:59.282 "params": { 00:22:59.282 "impl_name": "posix", 00:22:59.282 "recv_buf_size": 2097152, 00:22:59.282 "send_buf_size": 2097152, 00:22:59.282 "enable_recv_pipe": true, 00:22:59.282 "enable_quickack": false, 00:22:59.282 "enable_placement_id": 0, 00:22:59.282 "enable_zerocopy_send_server": true, 00:22:59.282 "enable_zerocopy_send_client": false, 00:22:59.282 "zerocopy_threshold": 0, 00:22:59.282 "tls_version": 0, 00:22:59.282 "enable_ktls": false 00:22:59.282 } 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "method": "sock_impl_set_options", 00:22:59.282 "params": { 00:22:59.282 "impl_name": "ssl", 00:22:59.282 "recv_buf_size": 4096, 00:22:59.282 "send_buf_size": 4096, 00:22:59.282 "enable_recv_pipe": true, 00:22:59.282 "enable_quickack": false, 00:22:59.282 "enable_placement_id": 0, 00:22:59.282 "enable_zerocopy_send_server": true, 00:22:59.282 "enable_zerocopy_send_client": false, 00:22:59.282 "zerocopy_threshold": 0, 00:22:59.282 "tls_version": 0, 00:22:59.282 "enable_ktls": false 00:22:59.282 } 00:22:59.282 } 00:22:59.282 ] 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "subsystem": "vmd", 00:22:59.282 "config": [] 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "subsystem": "accel", 00:22:59.282 "config": [ 00:22:59.282 { 00:22:59.282 "method": "accel_set_options", 00:22:59.282 "params": { 00:22:59.282 "small_cache_size": 128, 
00:22:59.282 "large_cache_size": 16, 00:22:59.282 "task_count": 2048, 00:22:59.282 "sequence_count": 2048, 00:22:59.282 "buf_count": 2048 00:22:59.282 } 00:22:59.282 } 00:22:59.282 ] 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "subsystem": "bdev", 00:22:59.282 "config": [ 00:22:59.282 { 00:22:59.282 "method": "bdev_set_options", 00:22:59.282 "params": { 00:22:59.282 "bdev_io_pool_size": 65535, 00:22:59.282 "bdev_io_cache_size": 256, 00:22:59.282 "bdev_auto_examine": true, 00:22:59.282 "iobuf_small_cache_size": 128, 00:22:59.282 "iobuf_large_cache_size": 16 00:22:59.282 } 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "method": "bdev_raid_set_options", 00:22:59.282 "params": { 00:22:59.282 "process_window_size_kb": 1024 00:22:59.282 } 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "method": "bdev_iscsi_set_options", 00:22:59.282 "params": { 00:22:59.282 "timeout_sec": 30 00:22:59.282 } 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "method": "bdev_nvme_set_options", 00:22:59.282 "params": { 00:22:59.282 "action_on_timeout": "none", 00:22:59.282 "timeout_us": 0, 00:22:59.282 "timeout_admin_us": 0, 00:22:59.282 "keep_alive_timeout_ms": 10000, 00:22:59.282 "transport_retry_count": 4, 00:22:59.282 "arbitration_burst": 0, 00:22:59.282 "low_priority_weight": 0, 00:22:59.282 "medium_priority_weight": 0, 00:22:59.282 "high_priority_weight": 0, 00:22:59.282 "nvme_adminq_poll_period_us": 10000, 00:22:59.282 "nvme_ioq_poll_period_us": 0, 00:22:59.282 "io_queue_requests": 0, 00:22:59.282 "delay_cmd_submit": true, 00:22:59.282 "bdev_retry_count": 3, 00:22:59.282 "transport_ack_timeout": 0, 00:22:59.282 "ctrlr_loss_timeout_sec": 0, 00:22:59.282 "reconnect_delay_sec": 0, 00:22:59.282 "fast_io_fail_timeout_sec": 0, 00:22:59.282 "generate_uuids": false, 00:22:59.282 "transport_tos": 0, 00:22:59.282 "io_path_stat": false, 00:22:59.282 "allow_accel_sequence": false 00:22:59.282 } 00:22:59.282 }, 00:22:59.282 { 00:22:59.282 "method": "bdev_nvme_set_hotplug", 00:22:59.282 "params": { 00:22:59.282 "period_us": 100000, 00:22:59.282 "enable": false 00:22:59.282 } 00:22:59.282 }, 00:22:59.283 { 00:22:59.283 "method": "bdev_malloc_create", 00:22:59.283 "params": { 00:22:59.283 "name": "malloc0", 00:22:59.283 "num_blocks": 8192, 00:22:59.283 "block_size": 4096, 00:22:59.283 "physical_block_size": 4096, 00:22:59.283 "uuid": "08f6614a-5256-4481-9ead-11a7ba3e0532", 00:22:59.283 "optimal_io_boundary": 0 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "bdev_wait_for_examine" 00:22:59.283 } 00:22:59.283 ] 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "subsystem": "nbd", 00:22:59.283 "config": [] 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "subsystem": "scheduler", 00:22:59.283 "config": [ 00:22:59.283 { 00:22:59.283 "method": "framework_set_scheduler", 00:22:59.283 "params": { 00:22:59.283 "name": "static" 00:22:59.283 } 00:22:59.283 } 00:22:59.283 ] 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "subsystem": "nvmf", 00:22:59.283 "config": [ 00:22:59.283 { 00:22:59.283 "method": "nvmf_set_config", 00:22:59.283 "params": { 00:22:59.283 "discovery_filter": "match_any", 00:22:59.283 "admin_cmd_passthru": { 00:22:59.283 "identify_ctrlr": false 00:22:59.283 } 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_set_max_subsystems", 00:22:59.283 "params": { 00:22:59.283 "max_subsystems": 1024 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_set_crdt", 00:22:59.283 "params": { 00:22:59.283 "crdt1": 0, 00:22:59.283 "crdt2": 0, 00:22:59.283 "crdt3": 0 00:22:59.283 } 
00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_create_transport", 00:22:59.283 "params": { 00:22:59.283 "trtype": "TCP", 00:22:59.283 "max_queue_depth": 128, 00:22:59.283 "max_io_qpairs_per_ctrlr": 127, 00:22:59.283 "in_capsule_data_size": 4096, 00:22:59.283 "max_io_size": 131072, 00:22:59.283 "io_unit_size": 131072, 00:22:59.283 "max_aq_depth": 128, 00:22:59.283 "num_shared_buffers": 511, 00:22:59.283 "buf_cache_size": 4294967295, 00:22:59.283 "dif_insert_or_strip": false, 00:22:59.283 "zcopy": false, 00:22:59.283 "c2h_success": false, 00:22:59.283 "sock_priority": 0, 00:22:59.283 "abort_timeout_sec": 1 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_create_subsystem", 00:22:59.283 "params": { 00:22:59.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.283 "allow_any_host": false, 00:22:59.283 "serial_number": "SPDK00000000000001", 00:22:59.283 "model_number": "SPDK bdev Controller", 00:22:59.283 "max_namespaces": 10, 00:22:59.283 "min_cntlid": 1, 00:22:59.283 "max_cntlid": 65519, 00:22:59.283 "ana_reporting": false 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_subsystem_add_host", 00:22:59.283 "params": { 00:22:59.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.283 "host": "nqn.2016-06.io.spdk:host1", 00:22:59.283 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_subsystem_add_ns", 00:22:59.283 "params": { 00:22:59.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.283 "namespace": { 00:22:59.283 "nsid": 1, 00:22:59.283 "bdev_name": "malloc0", 00:22:59.283 "nguid": "08F6614A525644819EAD11A7BA3E0532", 00:22:59.283 "uuid": "08f6614a-5256-4481-9ead-11a7ba3e0532" 00:22:59.283 } 00:22:59.283 } 00:22:59.283 }, 00:22:59.283 { 00:22:59.283 "method": "nvmf_subsystem_add_listener", 00:22:59.283 "params": { 00:22:59.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.283 "listen_address": { 00:22:59.283 "trtype": "TCP", 00:22:59.283 "adrfam": "IPv4", 00:22:59.283 "traddr": "10.0.0.2", 00:22:59.283 "trsvcid": "4420" 00:22:59.283 }, 00:22:59.283 "secure_channel": true 00:22:59.283 } 00:22:59.283 } 00:22:59.283 ] 00:22:59.283 } 00:22:59.283 ] 00:22:59.283 }' 00:22:59.283 10:18:12 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:59.541 10:18:12 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:59.541 "subsystems": [ 00:22:59.541 { 00:22:59.541 "subsystem": "iobuf", 00:22:59.541 "config": [ 00:22:59.541 { 00:22:59.541 "method": "iobuf_set_options", 00:22:59.541 "params": { 00:22:59.541 "small_pool_count": 8192, 00:22:59.541 "large_pool_count": 1024, 00:22:59.541 "small_bufsize": 8192, 00:22:59.541 "large_bufsize": 135168 00:22:59.541 } 00:22:59.541 } 00:22:59.541 ] 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "subsystem": "sock", 00:22:59.541 "config": [ 00:22:59.541 { 00:22:59.541 "method": "sock_impl_set_options", 00:22:59.541 "params": { 00:22:59.541 "impl_name": "posix", 00:22:59.541 "recv_buf_size": 2097152, 00:22:59.541 "send_buf_size": 2097152, 00:22:59.541 "enable_recv_pipe": true, 00:22:59.541 "enable_quickack": false, 00:22:59.541 "enable_placement_id": 0, 00:22:59.541 "enable_zerocopy_send_server": true, 00:22:59.541 "enable_zerocopy_send_client": false, 00:22:59.541 "zerocopy_threshold": 0, 00:22:59.541 "tls_version": 0, 00:22:59.541 "enable_ktls": false 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": 
"sock_impl_set_options", 00:22:59.541 "params": { 00:22:59.541 "impl_name": "ssl", 00:22:59.541 "recv_buf_size": 4096, 00:22:59.541 "send_buf_size": 4096, 00:22:59.541 "enable_recv_pipe": true, 00:22:59.541 "enable_quickack": false, 00:22:59.541 "enable_placement_id": 0, 00:22:59.541 "enable_zerocopy_send_server": true, 00:22:59.541 "enable_zerocopy_send_client": false, 00:22:59.541 "zerocopy_threshold": 0, 00:22:59.541 "tls_version": 0, 00:22:59.541 "enable_ktls": false 00:22:59.541 } 00:22:59.541 } 00:22:59.541 ] 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "subsystem": "vmd", 00:22:59.541 "config": [] 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "subsystem": "accel", 00:22:59.541 "config": [ 00:22:59.541 { 00:22:59.541 "method": "accel_set_options", 00:22:59.541 "params": { 00:22:59.541 "small_cache_size": 128, 00:22:59.541 "large_cache_size": 16, 00:22:59.541 "task_count": 2048, 00:22:59.541 "sequence_count": 2048, 00:22:59.541 "buf_count": 2048 00:22:59.541 } 00:22:59.541 } 00:22:59.541 ] 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "subsystem": "bdev", 00:22:59.541 "config": [ 00:22:59.541 { 00:22:59.541 "method": "bdev_set_options", 00:22:59.541 "params": { 00:22:59.541 "bdev_io_pool_size": 65535, 00:22:59.541 "bdev_io_cache_size": 256, 00:22:59.541 "bdev_auto_examine": true, 00:22:59.541 "iobuf_small_cache_size": 128, 00:22:59.541 "iobuf_large_cache_size": 16 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": "bdev_raid_set_options", 00:22:59.541 "params": { 00:22:59.541 "process_window_size_kb": 1024 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": "bdev_iscsi_set_options", 00:22:59.541 "params": { 00:22:59.541 "timeout_sec": 30 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": "bdev_nvme_set_options", 00:22:59.541 "params": { 00:22:59.541 "action_on_timeout": "none", 00:22:59.541 "timeout_us": 0, 00:22:59.541 "timeout_admin_us": 0, 00:22:59.541 "keep_alive_timeout_ms": 10000, 00:22:59.541 "transport_retry_count": 4, 00:22:59.541 "arbitration_burst": 0, 00:22:59.541 "low_priority_weight": 0, 00:22:59.541 "medium_priority_weight": 0, 00:22:59.541 "high_priority_weight": 0, 00:22:59.541 "nvme_adminq_poll_period_us": 10000, 00:22:59.541 "nvme_ioq_poll_period_us": 0, 00:22:59.541 "io_queue_requests": 512, 00:22:59.541 "delay_cmd_submit": true, 00:22:59.541 "bdev_retry_count": 3, 00:22:59.541 "transport_ack_timeout": 0, 00:22:59.541 "ctrlr_loss_timeout_sec": 0, 00:22:59.541 "reconnect_delay_sec": 0, 00:22:59.541 "fast_io_fail_timeout_sec": 0, 00:22:59.541 "generate_uuids": false, 00:22:59.541 "transport_tos": 0, 00:22:59.541 "io_path_stat": false, 00:22:59.541 "allow_accel_sequence": false 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": "bdev_nvme_attach_controller", 00:22:59.541 "params": { 00:22:59.541 "name": "TLSTEST", 00:22:59.541 "trtype": "TCP", 00:22:59.541 "adrfam": "IPv4", 00:22:59.541 "traddr": "10.0.0.2", 00:22:59.541 "trsvcid": "4420", 00:22:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.541 "prchk_reftag": false, 00:22:59.541 "prchk_guard": false, 00:22:59.541 "ctrlr_loss_timeout_sec": 0, 00:22:59.541 "reconnect_delay_sec": 0, 00:22:59.541 "fast_io_fail_timeout_sec": 0, 00:22:59.541 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:59.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.541 "hdgst": false, 00:22:59.541 "ddgst": false 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": "bdev_nvme_set_hotplug", 00:22:59.541 
"params": { 00:22:59.541 "period_us": 100000, 00:22:59.541 "enable": false 00:22:59.541 } 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "method": "bdev_wait_for_examine" 00:22:59.541 } 00:22:59.541 ] 00:22:59.541 }, 00:22:59.541 { 00:22:59.541 "subsystem": "nbd", 00:22:59.541 "config": [] 00:22:59.541 } 00:22:59.541 ] 00:22:59.541 }' 00:22:59.541 10:18:12 -- target/tls.sh@208 -- # killprocess 353398 00:22:59.541 10:18:12 -- common/autotest_common.sh@926 -- # '[' -z 353398 ']' 00:22:59.541 10:18:12 -- common/autotest_common.sh@930 -- # kill -0 353398 00:22:59.541 10:18:12 -- common/autotest_common.sh@931 -- # uname 00:22:59.541 10:18:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:59.541 10:18:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 353398 00:22:59.541 10:18:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:59.541 10:18:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:59.541 10:18:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 353398' 00:22:59.541 killing process with pid 353398 00:22:59.542 10:18:12 -- common/autotest_common.sh@945 -- # kill 353398 00:22:59.542 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.542 00:22:59.542 Latency(us) 00:22:59.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.542 =================================================================================================================== 00:22:59.542 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.542 10:18:12 -- common/autotest_common.sh@950 -- # wait 353398 00:22:59.798 10:18:12 -- target/tls.sh@209 -- # killprocess 353016 00:22:59.798 10:18:12 -- common/autotest_common.sh@926 -- # '[' -z 353016 ']' 00:22:59.798 10:18:12 -- common/autotest_common.sh@930 -- # kill -0 353016 00:22:59.798 10:18:12 -- common/autotest_common.sh@931 -- # uname 00:22:59.798 10:18:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:59.798 10:18:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 353016 00:22:59.798 10:18:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:59.798 10:18:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:59.798 10:18:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 353016' 00:22:59.798 killing process with pid 353016 00:22:59.798 10:18:12 -- common/autotest_common.sh@945 -- # kill 353016 00:22:59.798 10:18:12 -- common/autotest_common.sh@950 -- # wait 353016 00:23:00.055 10:18:13 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:00.055 10:18:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:00.055 10:18:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:00.055 10:18:13 -- target/tls.sh@212 -- # echo '{ 00:23:00.055 "subsystems": [ 00:23:00.055 { 00:23:00.055 "subsystem": "iobuf", 00:23:00.055 "config": [ 00:23:00.055 { 00:23:00.055 "method": "iobuf_set_options", 00:23:00.055 "params": { 00:23:00.055 "small_pool_count": 8192, 00:23:00.055 "large_pool_count": 1024, 00:23:00.055 "small_bufsize": 8192, 00:23:00.055 "large_bufsize": 135168 00:23:00.055 } 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "sock", 00:23:00.055 "config": [ 00:23:00.055 { 00:23:00.055 "method": "sock_impl_set_options", 00:23:00.055 "params": { 00:23:00.055 "impl_name": "posix", 00:23:00.055 "recv_buf_size": 2097152, 00:23:00.055 "send_buf_size": 2097152, 00:23:00.055 
"enable_recv_pipe": true, 00:23:00.055 "enable_quickack": false, 00:23:00.055 "enable_placement_id": 0, 00:23:00.055 "enable_zerocopy_send_server": true, 00:23:00.055 "enable_zerocopy_send_client": false, 00:23:00.055 "zerocopy_threshold": 0, 00:23:00.055 "tls_version": 0, 00:23:00.055 "enable_ktls": false 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "sock_impl_set_options", 00:23:00.055 "params": { 00:23:00.055 "impl_name": "ssl", 00:23:00.055 "recv_buf_size": 4096, 00:23:00.055 "send_buf_size": 4096, 00:23:00.055 "enable_recv_pipe": true, 00:23:00.055 "enable_quickack": false, 00:23:00.055 "enable_placement_id": 0, 00:23:00.055 "enable_zerocopy_send_server": true, 00:23:00.055 "enable_zerocopy_send_client": false, 00:23:00.055 "zerocopy_threshold": 0, 00:23:00.055 "tls_version": 0, 00:23:00.055 "enable_ktls": false 00:23:00.055 } 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "vmd", 00:23:00.055 "config": [] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "accel", 00:23:00.055 "config": [ 00:23:00.055 { 00:23:00.055 "method": "accel_set_options", 00:23:00.055 "params": { 00:23:00.055 "small_cache_size": 128, 00:23:00.055 "large_cache_size": 16, 00:23:00.055 "task_count": 2048, 00:23:00.055 "sequence_count": 2048, 00:23:00.055 "buf_count": 2048 00:23:00.055 } 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "bdev", 00:23:00.055 "config": [ 00:23:00.055 { 00:23:00.055 "method": "bdev_set_options", 00:23:00.055 "params": { 00:23:00.055 "bdev_io_pool_size": 65535, 00:23:00.055 "bdev_io_cache_size": 256, 00:23:00.055 "bdev_auto_examine": true, 00:23:00.055 "iobuf_small_cache_size": 128, 00:23:00.055 "iobuf_large_cache_size": 16 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "bdev_raid_set_options", 00:23:00.055 "params": { 00:23:00.055 "process_window_size_kb": 1024 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "bdev_iscsi_set_options", 00:23:00.055 "params": { 00:23:00.055 "timeout_sec": 30 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "bdev_nvme_set_options", 00:23:00.055 "params": { 00:23:00.055 "action_on_timeout": "none", 00:23:00.055 "timeout_us": 0, 00:23:00.055 "timeout_admin_us": 0, 00:23:00.055 "keep_alive_timeout_ms": 10000, 00:23:00.055 "transport_retry_count": 4, 00:23:00.055 "arbitration_burst": 0, 00:23:00.055 "low_priority_weight": 0, 00:23:00.055 "medium_priority_weight": 0, 00:23:00.055 "high_priority_weight": 0, 00:23:00.055 "nvme_adminq_poll_period_us": 10000, 00:23:00.055 "nvme_ioq_poll_period_us": 0, 00:23:00.055 "io_queue_requests": 0, 00:23:00.055 "delay_cmd_submit": true, 00:23:00.055 "bdev_retry_count": 3, 00:23:00.055 "transport_ack_timeout": 0, 00:23:00.055 "ctrlr_loss_timeout_sec": 0, 00:23:00.055 "reconnect_delay_sec": 0, 00:23:00.055 "fast_io_fail_timeout_sec": 0, 00:23:00.055 "generate_uuids": false, 00:23:00.055 "transport_tos": 0, 00:23:00.055 "io_path_stat": false, 00:23:00.055 "allow_accel_sequence": false 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "bdev_nvme_set_hotplug", 00:23:00.055 "params": { 00:23:00.055 "period_us": 100000, 00:23:00.055 "enable": false 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "bdev_malloc_create", 00:23:00.055 "params": { 00:23:00.055 "name": "malloc0", 00:23:00.055 "num_blocks": 8192, 00:23:00.055 "block_size": 4096, 00:23:00.055 "physical_block_size": 4096, 00:23:00.055 "uuid": 
"08f6614a-5256-4481-9ead-11a7ba3e0532", 00:23:00.055 "optimal_io_boundary": 0 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "bdev_wait_for_examine" 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "nbd", 00:23:00.055 "config": [] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "scheduler", 00:23:00.055 "config": [ 00:23:00.055 { 00:23:00.055 "method": "framework_set_scheduler", 00:23:00.055 "params": { 00:23:00.055 "name": "static" 00:23:00.055 } 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "subsystem": "nvmf", 00:23:00.055 "config": [ 00:23:00.055 { 00:23:00.055 "method": "nvmf_set_config", 00:23:00.055 "params": { 00:23:00.055 "discovery_filter": "match_any", 00:23:00.055 "admin_cmd_passthru": { 00:23:00.055 "identify_ctrlr": false 00:23:00.055 } 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_set_max_subsystems", 00:23:00.055 "params": { 00:23:00.055 "max_subsystems": 1024 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_set_crdt", 00:23:00.055 "params": { 00:23:00.055 "crdt1": 0, 00:23:00.055 "crdt2": 0, 00:23:00.055 "crdt3": 0 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_create_transport", 00:23:00.055 "params": { 00:23:00.055 "trtype": "TCP", 00:23:00.055 "max_queue_depth": 128, 00:23:00.055 "max_io_qpairs_per_ctrlr": 127, 00:23:00.055 "in_capsule_data_size": 4096, 00:23:00.055 "max_io_size": 131072, 00:23:00.055 "io_unit_size": 131072, 00:23:00.055 "max_aq_depth": 128, 00:23:00.055 "num_shared_buffers": 511, 00:23:00.055 "buf_cache_size": 4294967295, 00:23:00.055 "dif_insert_or_strip": false, 00:23:00.055 "zcopy": false, 00:23:00.055 "c2h_success": false, 00:23:00.055 "sock_priority": 0, 00:23:00.055 "abort_timeout_sec": 1 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_create_subsystem", 00:23:00.055 "params": { 00:23:00.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.055 "allow_any_host": false, 00:23:00.055 "serial_number": "SPDK00000000000001", 00:23:00.055 "model_number": "SPDK bdev Controller", 00:23:00.055 "max_namespaces": 10, 00:23:00.055 "min_cntlid": 1, 00:23:00.055 "max_cntlid": 65519, 00:23:00.055 "ana_reporting": false 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_subsystem_add_host", 00:23:00.055 "params": { 00:23:00.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.055 "host": "nqn.2016-06.io.spdk:host1", 00:23:00.055 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_subsystem_add_ns", 00:23:00.055 "params": { 00:23:00.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.055 "namespace": { 00:23:00.055 "nsid": 1, 00:23:00.055 "bdev_name": "malloc0", 00:23:00.055 "nguid": "08F6614A525644819EAD11A7BA3E0532", 00:23:00.055 "uuid": "08f6614a-5256-4481-9ead-11a7ba3e0532" 00:23:00.055 } 00:23:00.055 } 00:23:00.055 }, 00:23:00.055 { 00:23:00.055 "method": "nvmf_subsystem_add_listener", 00:23:00.055 "params": { 00:23:00.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.055 "listen_address": { 00:23:00.055 "trtype": "TCP", 00:23:00.055 "adrfam": "IPv4", 00:23:00.055 "traddr": "10.0.0.2", 00:23:00.055 "trsvcid": "4420" 00:23:00.055 }, 00:23:00.055 "secure_channel": true 00:23:00.055 } 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 } 00:23:00.055 ] 00:23:00.055 }' 00:23:00.055 10:18:13 -- common/autotest_common.sh@10 -- # set +x 
00:23:00.055 10:18:13 -- nvmf/common.sh@469 -- # nvmfpid=353772 00:23:00.055 10:18:13 -- nvmf/common.sh@470 -- # waitforlisten 353772 00:23:00.055 10:18:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:00.055 10:18:13 -- common/autotest_common.sh@819 -- # '[' -z 353772 ']' 00:23:00.055 10:18:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.055 10:18:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:00.055 10:18:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.055 10:18:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:00.055 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:23:00.055 [2024-04-24 10:18:13.253356] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:00.055 [2024-04-24 10:18:13.253403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.055 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.055 [2024-04-24 10:18:13.310428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.313 [2024-04-24 10:18:13.386695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:00.313 [2024-04-24 10:18:13.386806] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.313 [2024-04-24 10:18:13.386815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.313 [2024-04-24 10:18:13.386822] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
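The "-c /dev/fd/62" argument together with the long echo '{...}' above is bash process substitution: target/tls.sh@205 captured the live target configuration into $tgtconf via "rpc.py save_config", and @212 replays it into a fresh nvmf_tgt without writing a temp file. A sketch of the pattern, assuming the harness's variable names:

  tgtconf=$(scripts/rpc.py save_config)
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")   # <(...) becomes a /dev/fd/NN path; 62 in this run

The bdevperf restart below replays $bdevperfconf the same way on /dev/fd/63, which is how the TLS attach parameters (PSK path included) survive into the re-launched initiator; since bdevperf is started with -z it then idles until "bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests" kicks off the 10-second verify run whose results follow.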
00:23:00.313 [2024-04-24 10:18:13.386842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.314 [2024-04-24 10:18:13.582291] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.570 [2024-04-24 10:18:13.614314] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.570 [2024-04-24 10:18:13.614488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.827 10:18:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:00.827 10:18:14 -- common/autotest_common.sh@852 -- # return 0 00:23:00.827 10:18:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:00.827 10:18:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:00.827 10:18:14 -- common/autotest_common.sh@10 -- # set +x 00:23:00.827 10:18:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.827 10:18:14 -- target/tls.sh@216 -- # bdevperf_pid=353908 00:23:00.827 10:18:14 -- target/tls.sh@217 -- # waitforlisten 353908 /var/tmp/bdevperf.sock 00:23:00.827 10:18:14 -- common/autotest_common.sh@819 -- # '[' -z 353908 ']' 00:23:00.827 10:18:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.827 10:18:14 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:00.827 10:18:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:00.827 10:18:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:00.827 10:18:14 -- target/tls.sh@213 -- # echo '{ 00:23:00.827 "subsystems": [ 00:23:00.827 { 00:23:00.827 "subsystem": "iobuf", 00:23:00.827 "config": [ 00:23:00.827 { 00:23:00.827 "method": "iobuf_set_options", 00:23:00.827 "params": { 00:23:00.827 "small_pool_count": 8192, 00:23:00.827 "large_pool_count": 1024, 00:23:00.827 "small_bufsize": 8192, 00:23:00.827 "large_bufsize": 135168 00:23:00.827 } 00:23:00.827 } 00:23:00.827 ] 00:23:00.827 }, 00:23:00.827 { 00:23:00.827 "subsystem": "sock", 00:23:00.827 "config": [ 00:23:00.827 { 00:23:00.827 "method": "sock_impl_set_options", 00:23:00.827 "params": { 00:23:00.827 "impl_name": "posix", 00:23:00.827 "recv_buf_size": 2097152, 00:23:00.827 "send_buf_size": 2097152, 00:23:00.827 "enable_recv_pipe": true, 00:23:00.827 "enable_quickack": false, 00:23:00.827 "enable_placement_id": 0, 00:23:00.827 "enable_zerocopy_send_server": true, 00:23:00.827 "enable_zerocopy_send_client": false, 00:23:00.827 "zerocopy_threshold": 0, 00:23:00.827 "tls_version": 0, 00:23:00.827 "enable_ktls": false 00:23:00.827 } 00:23:00.827 }, 00:23:00.827 { 00:23:00.827 "method": "sock_impl_set_options", 00:23:00.827 "params": { 00:23:00.827 "impl_name": "ssl", 00:23:00.827 "recv_buf_size": 4096, 00:23:00.827 "send_buf_size": 4096, 00:23:00.827 "enable_recv_pipe": true, 00:23:00.827 "enable_quickack": false, 00:23:00.827 "enable_placement_id": 0, 00:23:00.827 "enable_zerocopy_send_server": true, 00:23:00.827 "enable_zerocopy_send_client": false, 00:23:00.827 "zerocopy_threshold": 0, 00:23:00.827 "tls_version": 0, 00:23:00.827 "enable_ktls": false 00:23:00.827 } 00:23:00.827 } 00:23:00.827 ] 00:23:00.827 }, 00:23:00.827 { 00:23:00.827 "subsystem": "vmd", 00:23:00.827 "config": [] 00:23:00.827 }, 00:23:00.827 { 00:23:00.827 "subsystem": "accel", 00:23:00.827 "config": [ 00:23:00.827 { 00:23:00.827 "method": "accel_set_options", 00:23:00.827 "params": { 00:23:00.827 "small_cache_size": 128, 00:23:00.827 "large_cache_size": 16, 00:23:00.827 "task_count": 2048, 00:23:00.827 "sequence_count": 2048, 00:23:00.827 "buf_count": 2048 00:23:00.827 } 00:23:00.827 } 00:23:00.827 ] 00:23:00.827 }, 00:23:00.827 { 00:23:00.827 "subsystem": "bdev", 00:23:00.827 "config": [ 00:23:00.827 { 00:23:00.827 "method": "bdev_set_options", 00:23:00.827 "params": { 00:23:00.828 "bdev_io_pool_size": 65535, 00:23:00.828 "bdev_io_cache_size": 256, 00:23:00.828 "bdev_auto_examine": true, 00:23:00.828 "iobuf_small_cache_size": 128, 00:23:00.828 "iobuf_large_cache_size": 16 00:23:00.828 } 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "method": "bdev_raid_set_options", 00:23:00.828 "params": { 00:23:00.828 "process_window_size_kb": 1024 00:23:00.828 } 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "method": "bdev_iscsi_set_options", 00:23:00.828 "params": { 00:23:00.828 "timeout_sec": 30 00:23:00.828 } 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "method": "bdev_nvme_set_options", 00:23:00.828 "params": { 00:23:00.828 "action_on_timeout": "none", 00:23:00.828 "timeout_us": 0, 00:23:00.828 "timeout_admin_us": 0, 00:23:00.828 "keep_alive_timeout_ms": 10000, 00:23:00.828 "transport_retry_count": 4, 00:23:00.828 "arbitration_burst": 0, 00:23:00.828 "low_priority_weight": 0, 00:23:00.828 "medium_priority_weight": 0, 00:23:00.828 "high_priority_weight": 0, 00:23:00.828 "nvme_adminq_poll_period_us": 10000, 00:23:00.828 "nvme_ioq_poll_period_us": 0, 00:23:00.828 "io_queue_requests": 512, 00:23:00.828 "delay_cmd_submit": true, 00:23:00.828 "bdev_retry_count": 3, 00:23:00.828 "transport_ack_timeout": 0, 00:23:00.828 
"ctrlr_loss_timeout_sec": 0, 00:23:00.828 "reconnect_delay_sec": 0, 00:23:00.828 "fast_io_fail_timeout_sec": 0, 00:23:00.828 "generate_uuids": false, 00:23:00.828 "transport_tos": 0, 00:23:00.828 "io_path_stat": false, 00:23:00.828 "allow_accel_sequence": false 00:23:00.828 } 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "method": "bdev_nvme_attach_controller", 00:23:00.828 "params": { 00:23:00.828 "name": "TLSTEST", 00:23:00.828 "trtype": "TCP", 00:23:00.828 "adrfam": "IPv4", 00:23:00.828 "traddr": "10.0.0.2", 00:23:00.828 "trsvcid": "4420", 00:23:00.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.828 "prchk_reftag": false, 00:23:00.828 "prchk_guard": false, 00:23:00.828 "ctrlr_loss_timeout_sec": 0, 00:23:00.828 "reconnect_delay_sec": 0, 00:23:00.828 "fast_io_fail_timeout_sec": 0, 00:23:00.828 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:23:00.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.828 "hdgst": false, 00:23:00.828 "ddgst": false 00:23:00.828 } 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "method": "bdev_nvme_set_hotplug", 00:23:00.828 "params": { 00:23:00.828 "period_us": 100000, 00:23:00.828 "enable": false 00:23:00.828 } 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "method": "bdev_wait_for_examine" 00:23:00.828 } 00:23:00.828 ] 00:23:00.828 }, 00:23:00.828 { 00:23:00.828 "subsystem": "nbd", 00:23:00.828 "config": [] 00:23:00.828 } 00:23:00.828 ] 00:23:00.828 }' 00:23:00.828 10:18:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:00.828 10:18:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.085 [2024-04-24 10:18:14.111122] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:01.085 [2024-04-24 10:18:14.111172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353908 ] 00:23:01.085 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.085 [2024-04-24 10:18:14.160725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.085 [2024-04-24 10:18:14.229877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.085 [2024-04-24 10:18:14.363672] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.648 10:18:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:01.648 10:18:14 -- common/autotest_common.sh@852 -- # return 0 00:23:01.648 10:18:14 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:01.905 Running I/O for 10 seconds... 
00:23:11.933 00:23:11.933 Latency(us) 00:23:11.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.933 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:11.933 Verification LBA range: start 0x0 length 0x2000 00:23:11.933 TLSTESTn1 : 10.02 3946.71 15.42 0.00 0.00 32393.51 5385.35 52884.70 00:23:11.933 =================================================================================================================== 00:23:11.933 Total : 3946.71 15.42 0.00 0.00 32393.51 5385.35 52884.70 00:23:11.933 0 00:23:11.933 10:18:25 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.933 10:18:25 -- target/tls.sh@223 -- # killprocess 353908 00:23:11.933 10:18:25 -- common/autotest_common.sh@926 -- # '[' -z 353908 ']' 00:23:11.933 10:18:25 -- common/autotest_common.sh@930 -- # kill -0 353908 00:23:11.933 10:18:25 -- common/autotest_common.sh@931 -- # uname 00:23:11.933 10:18:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:11.933 10:18:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 353908 00:23:11.933 10:18:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:11.933 10:18:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:11.933 10:18:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 353908' 00:23:11.933 killing process with pid 353908 00:23:11.933 10:18:25 -- common/autotest_common.sh@945 -- # kill 353908 00:23:11.933 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.933 00:23:11.933 Latency(us) 00:23:11.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.933 =================================================================================================================== 00:23:11.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.933 10:18:25 -- common/autotest_common.sh@950 -- # wait 353908 00:23:12.191 10:18:25 -- target/tls.sh@224 -- # killprocess 353772 00:23:12.191 10:18:25 -- common/autotest_common.sh@926 -- # '[' -z 353772 ']' 00:23:12.191 10:18:25 -- common/autotest_common.sh@930 -- # kill -0 353772 00:23:12.191 10:18:25 -- common/autotest_common.sh@931 -- # uname 00:23:12.191 10:18:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:12.191 10:18:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 353772 00:23:12.191 10:18:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:12.191 10:18:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:12.191 10:18:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 353772' 00:23:12.191 killing process with pid 353772 00:23:12.191 10:18:25 -- common/autotest_common.sh@945 -- # kill 353772 00:23:12.191 10:18:25 -- common/autotest_common.sh@950 -- # wait 353772 00:23:12.450 10:18:25 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:23:12.450 10:18:25 -- target/tls.sh@227 -- # cleanup 00:23:12.450 10:18:25 -- target/tls.sh@15 -- # process_shm --id 0 00:23:12.450 10:18:25 -- common/autotest_common.sh@796 -- # type=--id 00:23:12.450 10:18:25 -- common/autotest_common.sh@797 -- # id=0 00:23:12.450 10:18:25 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:12.450 10:18:25 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:12.450 10:18:25 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:12.450 10:18:25 -- common/autotest_common.sh@804 -- # [[ -z 
nvmf_trace.0 ]] 00:23:12.450 10:18:25 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:12.450 10:18:25 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:12.450 nvmf_trace.0 00:23:12.450 10:18:25 -- common/autotest_common.sh@811 -- # return 0 00:23:12.450 10:18:25 -- target/tls.sh@16 -- # killprocess 353908 00:23:12.450 10:18:25 -- common/autotest_common.sh@926 -- # '[' -z 353908 ']' 00:23:12.450 10:18:25 -- common/autotest_common.sh@930 -- # kill -0 353908 00:23:12.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (353908) - No such process 00:23:12.450 10:18:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 353908 is not found' 00:23:12.450 Process with pid 353908 is not found 00:23:12.450 10:18:25 -- target/tls.sh@17 -- # nvmftestfini 00:23:12.450 10:18:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:12.450 10:18:25 -- nvmf/common.sh@116 -- # sync 00:23:12.450 10:18:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:12.450 10:18:25 -- nvmf/common.sh@119 -- # set +e 00:23:12.450 10:18:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:12.450 10:18:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:12.450 rmmod nvme_tcp 00:23:12.450 rmmod nvme_fabrics 00:23:12.450 rmmod nvme_keyring 00:23:12.450 10:18:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:12.450 10:18:25 -- nvmf/common.sh@123 -- # set -e 00:23:12.450 10:18:25 -- nvmf/common.sh@124 -- # return 0 00:23:12.450 10:18:25 -- nvmf/common.sh@477 -- # '[' -n 353772 ']' 00:23:12.450 10:18:25 -- nvmf/common.sh@478 -- # killprocess 353772 00:23:12.450 10:18:25 -- common/autotest_common.sh@926 -- # '[' -z 353772 ']' 00:23:12.450 10:18:25 -- common/autotest_common.sh@930 -- # kill -0 353772 00:23:12.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (353772) - No such process 00:23:12.450 10:18:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 353772 is not found' 00:23:12.450 Process with pid 353772 is not found 00:23:12.450 10:18:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:12.450 10:18:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:12.450 10:18:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:12.450 10:18:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.450 10:18:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:12.450 10:18:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.450 10:18:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.450 10:18:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.979 10:18:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:14.979 10:18:27 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:23:14.979 00:23:14.979 real 1m12.137s 00:23:14.979 user 1m47.083s 00:23:14.979 sys 0m26.913s 00:23:14.979 10:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.979 10:18:27 -- common/autotest_common.sh@10 -- # set +x 00:23:14.979 ************************************ 00:23:14.979 END TEST nvmf_tls 00:23:14.979 ************************************ 00:23:14.979 10:18:27 -- 
nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:14.979 10:18:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:14.979 10:18:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:14.979 10:18:27 -- common/autotest_common.sh@10 -- # set +x 00:23:14.979 ************************************ 00:23:14.979 START TEST nvmf_fips 00:23:14.979 ************************************ 00:23:14.979 10:18:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:14.979 * Looking for test storage... 00:23:14.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:14.979 10:18:27 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.979 10:18:27 -- nvmf/common.sh@7 -- # uname -s 00:23:14.979 10:18:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.979 10:18:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.979 10:18:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.979 10:18:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.979 10:18:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.979 10:18:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.979 10:18:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.979 10:18:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.979 10:18:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.979 10:18:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.979 10:18:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.979 10:18:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.979 10:18:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.979 10:18:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.979 10:18:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.980 10:18:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.980 10:18:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.980 10:18:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.980 10:18:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.980 10:18:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.980 10:18:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.980 10:18:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.980 10:18:27 -- paths/export.sh@5 -- # export PATH 00:23:14.980 10:18:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.980 10:18:27 -- nvmf/common.sh@46 -- # : 0 00:23:14.980 10:18:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:14.980 10:18:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:14.980 10:18:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:14.980 10:18:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.980 10:18:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.980 10:18:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:14.980 10:18:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:14.980 10:18:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:14.980 10:18:27 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:14.980 10:18:27 -- fips/fips.sh@89 -- # check_openssl_version 00:23:14.980 10:18:27 -- fips/fips.sh@83 -- # local target=3.0.0 00:23:14.980 10:18:27 -- fips/fips.sh@85 -- # openssl version 00:23:14.980 10:18:27 -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:14.980 10:18:27 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:14.980 10:18:27 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:14.980 10:18:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:14.980 10:18:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:14.980 10:18:27 -- scripts/common.sh@335 -- # IFS=.-: 00:23:14.980 10:18:27 -- scripts/common.sh@335 -- # read -ra ver1 00:23:14.980 10:18:27 -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.980 10:18:27 -- scripts/common.sh@336 -- # read -ra ver2 00:23:14.980 10:18:27 -- scripts/common.sh@337 -- # local 'op=>=' 00:23:14.980 10:18:27 -- scripts/common.sh@339 -- # ver1_l=3 00:23:14.980 10:18:27 -- scripts/common.sh@340 -- # ver2_l=3 00:23:14.980 10:18:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:23:14.980 10:18:27 -- scripts/common.sh@343 -- # case "$op" in 00:23:14.980 10:18:27 -- scripts/common.sh@347 -- # : 1 00:23:14.980 10:18:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:14.980 10:18:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.980 10:18:27 -- scripts/common.sh@364 -- # decimal 3 00:23:14.980 10:18:27 -- scripts/common.sh@352 -- # local d=3 00:23:14.980 10:18:27 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:14.980 10:18:27 -- scripts/common.sh@354 -- # echo 3 00:23:14.980 10:18:27 -- scripts/common.sh@364 -- # ver1[v]=3 00:23:14.980 10:18:27 -- scripts/common.sh@365 -- # decimal 3 00:23:14.980 10:18:27 -- scripts/common.sh@352 -- # local d=3 00:23:14.980 10:18:27 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:14.980 10:18:27 -- scripts/common.sh@354 -- # echo 3 00:23:14.980 10:18:27 -- scripts/common.sh@365 -- # ver2[v]=3 00:23:14.980 10:18:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:14.980 10:18:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:14.980 10:18:27 -- scripts/common.sh@363 -- # (( v++ )) 00:23:14.980 10:18:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.980 10:18:27 -- scripts/common.sh@364 -- # decimal 0 00:23:14.980 10:18:27 -- scripts/common.sh@352 -- # local d=0 00:23:14.980 10:18:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:14.980 10:18:27 -- scripts/common.sh@354 -- # echo 0 00:23:14.980 10:18:27 -- scripts/common.sh@364 -- # ver1[v]=0 00:23:14.980 10:18:27 -- scripts/common.sh@365 -- # decimal 0 00:23:14.980 10:18:27 -- scripts/common.sh@352 -- # local d=0 00:23:14.980 10:18:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:14.980 10:18:27 -- scripts/common.sh@354 -- # echo 0 00:23:14.980 10:18:27 -- scripts/common.sh@365 -- # ver2[v]=0 00:23:14.980 10:18:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:14.980 10:18:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:14.980 10:18:27 -- scripts/common.sh@363 -- # (( v++ )) 00:23:14.980 10:18:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.980 10:18:27 -- scripts/common.sh@364 -- # decimal 9 00:23:14.980 10:18:27 -- scripts/common.sh@352 -- # local d=9 00:23:14.980 10:18:27 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:14.980 10:18:27 -- scripts/common.sh@354 -- # echo 9 00:23:14.980 10:18:27 -- scripts/common.sh@364 -- # ver1[v]=9 00:23:14.980 10:18:27 -- scripts/common.sh@365 -- # decimal 0 00:23:14.980 10:18:27 -- scripts/common.sh@352 -- # local d=0 00:23:14.980 10:18:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:14.980 10:18:27 -- scripts/common.sh@354 -- # echo 0 00:23:14.980 10:18:27 -- scripts/common.sh@365 -- # ver2[v]=0 00:23:14.980 10:18:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:14.980 10:18:27 -- scripts/common.sh@366 -- # return 0 00:23:14.980 10:18:27 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:14.980 10:18:27 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:14.980 10:18:27 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:14.980 10:18:27 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:14.980 10:18:27 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:14.980 10:18:27 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:14.980 10:18:27 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:14.980 10:18:27 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:23:14.980 10:18:27 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:23:14.980 10:18:27 -- fips/fips.sh@114 -- # build_openssl_config 00:23:14.980 10:18:27 -- fips/fips.sh@37 -- # cat 00:23:14.980 10:18:27 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:14.980 10:18:27 -- fips/fips.sh@58 -- # cat - 00:23:14.980 10:18:27 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:14.980 10:18:27 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:14.980 10:18:27 -- fips/fips.sh@117 -- # mapfile -t providers 00:23:14.980 10:18:27 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:23:14.980 10:18:27 -- fips/fips.sh@117 -- # openssl list -providers 00:23:14.980 10:18:27 -- fips/fips.sh@117 -- # grep name 00:23:14.980 10:18:28 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:14.980 10:18:28 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:14.980 10:18:28 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:14.980 10:18:28 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:14.980 10:18:28 -- common/autotest_common.sh@640 -- # local es=0 00:23:14.980 10:18:28 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:14.980 10:18:28 -- common/autotest_common.sh@628 -- # local arg=openssl 00:23:14.980 10:18:28 -- fips/fips.sh@128 -- # : 00:23:14.980 10:18:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:14.980 10:18:28 -- common/autotest_common.sh@632 -- # type -t openssl 00:23:14.980 10:18:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:14.980 10:18:28 -- common/autotest_common.sh@634 -- # type -P openssl 00:23:14.980 10:18:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:14.980 10:18:28 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:23:14.980 10:18:28 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:23:14.980 10:18:28 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:23:14.980 Error setting digest 00:23:14.980 0082734FD67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:14.980 0082734FD67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:14.980 10:18:28 -- common/autotest_common.sh@643 -- # es=1 00:23:14.980 10:18:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:14.980 10:18:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:14.980 10:18:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
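The trace above captures fips.sh's two-stage gate: check_openssl_version splits the `openssl version` output on '.', '-' and ':' and compares it field by field against the 3.0.0 floor, and the script then forces the FIPS provider through a generated spdk_fips.conf and proves enforcement by expecting `openssl md5` to fail, which is exactly the "Error setting digest" seen above (MD5 is not a FIPS-approved digest). A minimal standalone sketch of the same gate follows; it assumes an OpenSSL 3.x install, reuses the spdk_fips.conf the test generates, and is a simplification of the traced logic, not the script's actual body.

  #!/usr/bin/env bash
  # Gate 1: require OpenSSL >= 3.0.0, comparing version fields numerically.
  ver=$(openssl version | awk '{print $2}')
  IFS=.-: read -ra have <<< "$ver"
  IFS=.-: read -ra want <<< "3.0.0"
  for i in 0 1 2; do
      (( ${have[i]:-0} > ${want[i]:-0} )) && break
      if (( ${have[i]:-0} < ${want[i]:-0} )); then
          echo "need OpenSSL >= 3.0.0, have $ver" >&2; exit 1
      fi
  done
  # Gate 2: with the FIPS config forced, a non-approved digest must be rejected.
  if echo -n test | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
      echo "MD5 succeeded, so FIPS mode is not actually enforced" >&2; exit 1
  fi
  echo "FIPS enforcement active"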
00:23:14.980 10:18:28 -- fips/fips.sh@131 -- # nvmftestinit 00:23:14.980 10:18:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:14.980 10:18:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.980 10:18:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:14.980 10:18:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:14.980 10:18:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:14.980 10:18:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.980 10:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.980 10:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.980 10:18:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:14.980 10:18:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:14.980 10:18:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:14.980 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.241 10:18:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:20.241 10:18:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:20.241 10:18:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:20.241 10:18:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:20.241 10:18:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:20.241 10:18:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:20.241 10:18:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:20.241 10:18:32 -- nvmf/common.sh@294 -- # net_devs=() 00:23:20.241 10:18:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:20.241 10:18:32 -- nvmf/common.sh@295 -- # e810=() 00:23:20.241 10:18:32 -- nvmf/common.sh@295 -- # local -ga e810 00:23:20.241 10:18:32 -- nvmf/common.sh@296 -- # x722=() 00:23:20.241 10:18:32 -- nvmf/common.sh@296 -- # local -ga x722 00:23:20.241 10:18:32 -- nvmf/common.sh@297 -- # mlx=() 00:23:20.241 10:18:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:20.241 10:18:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.241 10:18:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:20.241 10:18:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:20.241 10:18:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:20.241 10:18:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:20.241 10:18:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:20.241 Found 0000:86:00.0 
(0x8086 - 0x159b) 00:23:20.241 10:18:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:20.241 10:18:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:20.241 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:20.241 10:18:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:20.241 10:18:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:20.241 10:18:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.241 10:18:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:20.241 10:18:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.241 10:18:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:20.241 Found net devices under 0000:86:00.0: cvl_0_0 00:23:20.241 10:18:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.241 10:18:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:20.241 10:18:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.241 10:18:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:20.241 10:18:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.241 10:18:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:20.241 Found net devices under 0000:86:00.1: cvl_0_1 00:23:20.241 10:18:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.241 10:18:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:20.241 10:18:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:20.241 10:18:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:20.241 10:18:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:20.241 10:18:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.241 10:18:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.241 10:18:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.241 10:18:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:20.241 10:18:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.241 10:18:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.241 10:18:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:20.241 10:18:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.241 10:18:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.241 10:18:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:20.241 10:18:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:20.241 10:18:32 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:23:20.241 10:18:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.241 10:18:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.242 10:18:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.242 10:18:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:20.242 10:18:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.242 10:18:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.242 10:18:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.242 10:18:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:20.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:23:20.242 00:23:20.242 --- 10.0.0.2 ping statistics --- 00:23:20.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.242 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:23:20.242 10:18:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:23:20.242 00:23:20.242 --- 10.0.0.1 ping statistics --- 00:23:20.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.242 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:20.242 10:18:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.242 10:18:32 -- nvmf/common.sh@410 -- # return 0 00:23:20.242 10:18:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:20.242 10:18:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.242 10:18:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:20.242 10:18:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:20.242 10:18:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.242 10:18:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:20.242 10:18:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:20.242 10:18:32 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:20.242 10:18:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:20.242 10:18:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:20.242 10:18:32 -- common/autotest_common.sh@10 -- # set +x 00:23:20.242 10:18:32 -- nvmf/common.sh@469 -- # nvmfpid=359148 00:23:20.242 10:18:32 -- nvmf/common.sh@470 -- # waitforlisten 359148 00:23:20.242 10:18:32 -- common/autotest_common.sh@819 -- # '[' -z 359148 ']' 00:23:20.242 10:18:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.242 10:18:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:20.242 10:18:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.242 10:18:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.242 10:18:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:20.242 10:18:32 -- common/autotest_common.sh@10 -- # set +x 00:23:20.242 [2024-04-24 10:18:32.930747] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
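nvmf_tcp_init, traced just above, turns the two E810 ports into a point-to-point rig: cvl_0_0 moves into a fresh network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP port, and a ping in each direction proves the path before the target starts. Condensed from the trace, with the same interface and namespace names:

  ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port in
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator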
00:23:20.242 [2024-04-24 10:18:32.930793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.242 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.242 [2024-04-24 10:18:32.987168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.242 [2024-04-24 10:18:33.056913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:20.242 [2024-04-24 10:18:33.057020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.242 [2024-04-24 10:18:33.057027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.242 [2024-04-24 10:18:33.057033] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.242 [2024-04-24 10:18:33.057053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.500 10:18:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:20.500 10:18:33 -- common/autotest_common.sh@852 -- # return 0 00:23:20.500 10:18:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:20.500 10:18:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:20.500 10:18:33 -- common/autotest_common.sh@10 -- # set +x 00:23:20.500 10:18:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.500 10:18:33 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:20.500 10:18:33 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:20.500 10:18:33 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.500 10:18:33 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:20.500 10:18:33 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.500 10:18:33 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.500 10:18:33 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.500 10:18:33 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:20.759 [2024-04-24 10:18:33.888437] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.759 [2024-04-24 10:18:33.904441] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.759 [2024-04-24 10:18:33.904602] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.759 malloc0 00:23:20.759 10:18:33 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.759 10:18:33 -- fips/fips.sh@148 -- # bdevperf_pid=359371 00:23:20.759 10:18:33 -- fips/fips.sh@149 -- # waitforlisten 359371 /var/tmp/bdevperf.sock 00:23:20.759 10:18:33 -- common/autotest_common.sh@819 -- # '[' -z 359371 ']' 00:23:20.759 10:18:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.759 10:18:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:20.759 10:18:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
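The TLS leg hinges on the pre-shared key written above in the NVMe TLS PSK interchange format (the NVMeTLSkey-1:01:... literal). fips.sh stores it byte-exact, with no trailing newline, and tightens permissions before handing it to both the target and bdevperf. The preparation in isolation; key.txt stands in for the workspace path the test uses:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt   # -n keeps the file byte-exact, no trailing newline
  chmod 0600 key.txt         # the PSK is a secret; keep it owner-readable only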
00:23:20.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.759 10:18:33 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.759 10:18:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:20.759 10:18:33 -- common/autotest_common.sh@10 -- # set +x 00:23:20.759 [2024-04-24 10:18:34.016045] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:20.759 [2024-04-24 10:18:34.016098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359371 ] 00:23:21.017 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.017 [2024-04-24 10:18:34.065859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.017 [2024-04-24 10:18:34.135808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.583 10:18:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:21.583 10:18:34 -- common/autotest_common.sh@852 -- # return 0 00:23:21.583 10:18:34 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:21.841 [2024-04-24 10:18:34.938673] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.841 TLSTESTn1 00:23:21.841 10:18:35 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.841 Running I/O for 10 seconds... 
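The I/O side runs through bdevperf in daemon mode: it is started with -z and its own RPC socket, then, once the socket is up (the waitforlisten step the harness performs), a TLS-wrapped controller is attached over TCP with the PSK from above, and perform_tests launches the queued verify workload whose ten-second run begins here. Condensed from the trace, with /path/to/spdk standing in for the jenkins workspace checkout:

  /path/to/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  /path/to/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  /path/to/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests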
00:23:34.043 00:23:34.043 Latency(us) 00:23:34.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.043 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:34.043 Verification LBA range: start 0x0 length 0x2000 00:23:34.044 TLSTESTn1 : 10.03 3963.39 15.48 0.00 0.00 32245.85 6439.62 55164.22 00:23:34.044 =================================================================================================================== 00:23:34.044 Total : 3963.39 15.48 0.00 0.00 32245.85 6439.62 55164.22 00:23:34.044 0 00:23:34.044 10:18:45 -- fips/fips.sh@1 -- # cleanup 00:23:34.044 10:18:45 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:34.044 10:18:45 -- common/autotest_common.sh@796 -- # type=--id 00:23:34.044 10:18:45 -- common/autotest_common.sh@797 -- # id=0 00:23:34.044 10:18:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:34.044 10:18:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:34.044 10:18:45 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:23:34.044 10:18:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:34.044 10:18:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:34.044 nvmf_trace.0 00:23:34.044 10:18:45 -- common/autotest_common.sh@811 -- # return 0 00:23:34.044 10:18:45 -- fips/fips.sh@16 -- # killprocess 359371 00:23:34.044 10:18:45 -- common/autotest_common.sh@926 -- # '[' -z 359371 ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@930 -- # kill -0 359371 00:23:34.044 10:18:45 -- common/autotest_common.sh@931 -- # uname 00:23:34.044 10:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 359371 00:23:34.044 10:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:34.044 10:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 359371' 00:23:34.044 killing process with pid 359371 00:23:34.044 10:18:45 -- common/autotest_common.sh@945 -- # kill 359371 00:23:34.044 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.044 00:23:34.044 Latency(us) 00:23:34.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.044 =================================================================================================================== 00:23:34.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.044 10:18:45 -- common/autotest_common.sh@950 -- # wait 359371 00:23:34.044 10:18:45 -- fips/fips.sh@17 -- # nvmftestfini 00:23:34.044 10:18:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:34.044 10:18:45 -- nvmf/common.sh@116 -- # sync 00:23:34.044 10:18:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:34.044 10:18:45 -- nvmf/common.sh@119 -- # set +e 00:23:34.044 10:18:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:34.044 10:18:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:34.044 rmmod nvme_tcp 00:23:34.044 rmmod nvme_fabrics 00:23:34.044 rmmod nvme_keyring 00:23:34.044 10:18:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:34.044 10:18:45 -- nvmf/common.sh@123 -- # set -e 00:23:34.044 10:18:45 -- nvmf/common.sh@124 -- # return 0 00:23:34.044 
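Teardown leans on autotest_common.sh's killprocess, whose trace appears both here (pid 359371, alive, comm reactor_2) and earlier in the TLS run (pids 353908 and 353772, already gone, hence the "No such process" lines). The shape of that helper, reduced to its visible guards; this is a sketch of the traced behavior, not the script's verbatim body:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      if ! kill -0 "$pid" 2>/dev/null; then        # kill -0 probes existence only
          echo "Process with pid $pid is not found"
          return 0
      fi
      local name
      name=$(ps --no-headers -o comm= "$pid")      # comm name of the live process
      [[ $name == sudo ]] && return 1              # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" 2>/dev/null
  }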
10:18:45 -- nvmf/common.sh@477 -- # '[' -n 359148 ']' 00:23:34.044 10:18:45 -- nvmf/common.sh@478 -- # killprocess 359148 00:23:34.044 10:18:45 -- common/autotest_common.sh@926 -- # '[' -z 359148 ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@930 -- # kill -0 359148 00:23:34.044 10:18:45 -- common/autotest_common.sh@931 -- # uname 00:23:34.044 10:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 359148 00:23:34.044 10:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:34.044 10:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:34.044 10:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 359148' 00:23:34.044 killing process with pid 359148 00:23:34.044 10:18:45 -- common/autotest_common.sh@945 -- # kill 359148 00:23:34.044 10:18:45 -- common/autotest_common.sh@950 -- # wait 359148 00:23:34.044 10:18:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:34.044 10:18:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:34.044 10:18:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:34.044 10:18:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.044 10:18:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:34.044 10:18:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.044 10:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.044 10:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.979 10:18:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:34.979 10:18:47 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:34.979 00:23:34.979 real 0m20.107s 00:23:34.979 user 0m21.719s 00:23:34.979 sys 0m9.084s 00:23:34.979 10:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:34.979 10:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:34.979 ************************************ 00:23:34.980 END TEST nvmf_fips 00:23:34.980 ************************************ 00:23:34.980 10:18:47 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:23:34.980 10:18:47 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:34.980 10:18:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:34.980 10:18:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:34.980 10:18:47 -- common/autotest_common.sh@10 -- # set +x 00:23:34.980 ************************************ 00:23:34.980 START TEST nvmf_fuzz 00:23:34.980 ************************************ 00:23:34.980 10:18:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:34.980 * Looking for test storage... 
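Before the fuzz preamble above proceeds, note how the FIPS run just unwound: nvmftestfini synced, unloaded the initiator modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), killed the target, dismissed the namespace through _remove_spdk_ns (hidden behind an fd redirect in the trace), flushed the leftover initiator address, and removed the key file. A sketch of that unwind; the netns deletion line is an assumption about what _remove_spdk_ns does, since the trace suppresses its body:

  modprobe -v -r nvme-tcp                        # pulls out nvme_fabrics/nvme_keyring too
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                       # drop the initiator-side address
  rm -f key.txt                                  # the PSK must not outlive the test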
00:23:34.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:34.980 10:18:48 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.980 10:18:48 -- nvmf/common.sh@7 -- # uname -s 00:23:34.980 10:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.980 10:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.980 10:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.980 10:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.980 10:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.980 10:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.980 10:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.980 10:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.980 10:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.980 10:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.980 10:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:34.980 10:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:34.980 10:18:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.980 10:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.980 10:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.980 10:18:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.980 10:18:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.980 10:18:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.980 10:18:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.980 10:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.980 10:18:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.980 10:18:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.980 10:18:48 -- paths/export.sh@5 -- # export PATH 00:23:34.980 10:18:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.980 10:18:48 -- nvmf/common.sh@46 -- # : 0 00:23:34.980 10:18:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:34.980 10:18:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:34.980 10:18:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:34.980 10:18:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.980 10:18:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.980 10:18:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:34.980 10:18:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:34.980 10:18:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:34.980 10:18:48 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:34.980 10:18:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:34.980 10:18:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.980 10:18:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:34.980 10:18:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:34.980 10:18:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:34.980 10:18:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.980 10:18:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.980 10:18:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.980 10:18:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:34.980 10:18:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:34.980 10:18:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:34.980 10:18:48 -- common/autotest_common.sh@10 -- # set +x 00:23:40.247 10:18:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:40.247 10:18:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:40.247 10:18:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:40.247 10:18:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:40.247 10:18:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:40.247 10:18:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:40.247 10:18:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:40.247 10:18:53 -- nvmf/common.sh@294 -- # net_devs=() 00:23:40.247 10:18:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:40.247 10:18:53 -- nvmf/common.sh@295 -- # e810=() 00:23:40.247 10:18:53 -- nvmf/common.sh@295 -- # local -ga e810 00:23:40.247 10:18:53 -- nvmf/common.sh@296 -- # x722=() 
00:23:40.247 10:18:53 -- nvmf/common.sh@296 -- # local -ga x722 00:23:40.247 10:18:53 -- nvmf/common.sh@297 -- # mlx=() 00:23:40.247 10:18:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:40.247 10:18:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.247 10:18:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:40.247 10:18:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:40.247 10:18:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:40.247 10:18:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:40.247 10:18:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:40.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:40.247 10:18:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:40.247 10:18:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:40.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:40.247 10:18:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:40.247 10:18:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:40.248 10:18:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:40.248 10:18:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:40.248 10:18:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:40.248 10:18:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.248 10:18:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:40.248 10:18:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.248 10:18:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:40.248 Found net devices under 0000:86:00.0: cvl_0_0 00:23:40.248 10:18:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
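Device discovery here is pure sysfs: common.sh keeps allowlists of supported PCI IDs (E810 as 0x8086:0x159b/0x1592, X722, several Mellanox parts), and for each matching function it resolves the kernel interface by globbing /sys/bus/pci/devices/$pci/net/ and keeping the basename, which is how 0000:86:00.0 and 0000:86:00.1 turn into cvl_0_0 and cvl_0_1 in the lines above. The lookup in isolation:

  shopt -s nullglob                                  # a miss yields an empty array, not a literal glob
  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev on this function
  if (( ${#pci_net_devs[@]} == 0 )); then
      echo "no net device bound to $pci" >&2
      exit 1
  fi
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"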
00:23:40.248 10:18:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:40.248 10:18:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.248 10:18:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:40.248 10:18:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.248 10:18:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:40.248 Found net devices under 0000:86:00.1: cvl_0_1 00:23:40.248 10:18:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.248 10:18:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:40.248 10:18:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:40.248 10:18:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:40.248 10:18:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:40.248 10:18:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:40.248 10:18:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.248 10:18:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.248 10:18:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.248 10:18:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:40.248 10:18:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.248 10:18:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.248 10:18:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:40.248 10:18:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.248 10:18:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.248 10:18:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:40.248 10:18:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:40.248 10:18:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.248 10:18:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.248 10:18:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.248 10:18:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.248 10:18:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:40.248 10:18:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.248 10:18:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.248 10:18:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.248 10:18:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:40.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:23:40.248 00:23:40.248 --- 10.0.0.2 ping statistics --- 00:23:40.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.248 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:40.248 10:18:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:40.248 00:23:40.248 --- 10.0.0.1 ping statistics --- 00:23:40.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.248 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:40.248 10:18:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.248 10:18:53 -- nvmf/common.sh@410 -- # return 0 00:23:40.248 10:18:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:40.248 10:18:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.248 10:18:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:40.248 10:18:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:40.248 10:18:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.248 10:18:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:40.248 10:18:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:40.248 10:18:53 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=364773 00:23:40.248 10:18:53 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:40.248 10:18:53 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:40.248 10:18:53 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 364773 00:23:40.248 10:18:53 -- common/autotest_common.sh@819 -- # '[' -z 364773 ']' 00:23:40.248 10:18:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.248 10:18:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:40.248 10:18:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
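Because the target port lives inside the namespace, nvmf_tgt is launched through ip netns exec with the shared-memory id, trace mask and core mask assembled in NVMF_APP, and the harness then blocks in waitforlisten until /var/tmp/spdk.sock answers before issuing any configuration. A sketch of that launch; the polling loop only approximates waitforlisten (rpc_get_methods is a standard SPDK RPC, but the real helper's body is not shown in this trace):

  ip netns exec cvl_0_0_ns_spdk /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  for _ in $(seq 1 100); do    # approximation of waitforlisten
      /path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1 && break
      sleep 0.1
  done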
00:23:40.248 10:18:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:40.248 10:18:53 -- common/autotest_common.sh@10 -- # set +x 00:23:41.183 10:18:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:41.183 10:18:54 -- common/autotest_common.sh@852 -- # return 0 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.183 10:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.183 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:23:41.183 10:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:41.183 10:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.183 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:23:41.183 Malloc0 00:23:41.183 10:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.183 10:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.183 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:23:41.183 10:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.183 10:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.183 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:23:41.183 10:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.183 10:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.183 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:23:41.183 10:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:41.183 10:18:54 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:13.269 Fuzzing completed. Shutting down the fuzz application 00:24:13.269 00:24:13.269 Dumping successful admin opcodes: 00:24:13.269 8, 9, 10, 24, 00:24:13.269 Dumping successful io opcodes: 00:24:13.269 0, 9, 00:24:13.269 NS: 0x200003aeff00 I/O qp, Total commands completed: 879987, total successful commands: 5124, random_seed: 1232351488 00:24:13.270 NS: 0x200003aeff00 admin qp, Total commands completed: 82653, total successful commands: 659, random_seed: 2845171136 00:24:13.270 10:19:24 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:13.270 Fuzzing completed. 
Shutting down the fuzz application 00:24:13.270 00:24:13.270 Dumping successful admin opcodes: 00:24:13.270 24, 00:24:13.270 Dumping successful io opcodes: 00:24:13.270 00:24:13.270 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1217812080 00:24:13.270 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1217899564 00:24:13.270 10:19:25 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.270 10:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:13.270 10:19:25 -- common/autotest_common.sh@10 -- # set +x 00:24:13.270 10:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:13.270 10:19:25 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:13.270 10:19:25 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:13.270 10:19:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:13.270 10:19:25 -- nvmf/common.sh@116 -- # sync 00:24:13.270 10:19:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:13.270 10:19:25 -- nvmf/common.sh@119 -- # set +e 00:24:13.270 10:19:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:13.270 10:19:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:13.270 rmmod nvme_tcp 00:24:13.270 rmmod nvme_fabrics 00:24:13.270 rmmod nvme_keyring 00:24:13.270 10:19:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:13.270 10:19:26 -- nvmf/common.sh@123 -- # set -e 00:24:13.270 10:19:26 -- nvmf/common.sh@124 -- # return 0 00:24:13.270 10:19:26 -- nvmf/common.sh@477 -- # '[' -n 364773 ']' 00:24:13.270 10:19:26 -- nvmf/common.sh@478 -- # killprocess 364773 00:24:13.270 10:19:26 -- common/autotest_common.sh@926 -- # '[' -z 364773 ']' 00:24:13.270 10:19:26 -- common/autotest_common.sh@930 -- # kill -0 364773 00:24:13.270 10:19:26 -- common/autotest_common.sh@931 -- # uname 00:24:13.270 10:19:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:13.270 10:19:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 364773 00:24:13.270 10:19:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:13.270 10:19:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:13.270 10:19:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 364773' 00:24:13.270 killing process with pid 364773 00:24:13.270 10:19:26 -- common/autotest_common.sh@945 -- # kill 364773 00:24:13.270 10:19:26 -- common/autotest_common.sh@950 -- # wait 364773 00:24:13.270 10:19:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:13.270 10:19:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:13.270 10:19:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:13.270 10:19:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.270 10:19:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:13.270 10:19:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.270 10:19:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.270 10:19:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.174 10:19:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:15.174 10:19:28 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:15.174 00:24:15.174 real 0m40.470s 00:24:15.174 user 0m53.155s 00:24:15.174 sys 0m16.554s 
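The subsystem the fuzzer just tore down was assembled with the rpc_cmd calls traced before the runs: a TCP transport, a 64 MB / 512 B-block malloc bdev, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420; nvme_fuzz then ran twice against the resulting trid, first a 30-second seeded random pass and then a replay driven by example.json, producing the opcode dumps above. Condensed, with /path/to/spdk standing in for the workspace and rpc as a shorthand wrapper for this sketch only:

  rpc() { /path/to/spdk/scripts/rpc.py "$@"; }       # shorthand, not the harness's rpc_cmd
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create -b Malloc0 64 512
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  /path/to/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
      -t 30 -S 123456 -F "$trid" -N -a               # seeded 30 s randomized pass
  /path/to/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
      -F "$trid" -j /path/to/spdk/test/app/fuzz/nvme_fuzz/example.json -a   # JSON replay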
00:24:15.174 10:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.174 10:19:28 -- common/autotest_common.sh@10 -- # set +x 00:24:15.174 ************************************ 00:24:15.174 END TEST nvmf_fuzz 00:24:15.174 ************************************ 00:24:15.433 10:19:28 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:15.434 10:19:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:15.434 10:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:15.434 10:19:28 -- common/autotest_common.sh@10 -- # set +x 00:24:15.434 ************************************ 00:24:15.434 START TEST nvmf_multiconnection 00:24:15.434 ************************************ 00:24:15.434 10:19:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:15.434 * Looking for test storage... 00:24:15.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:15.434 10:19:28 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.434 10:19:28 -- nvmf/common.sh@7 -- # uname -s 00:24:15.434 10:19:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.434 10:19:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.434 10:19:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.434 10:19:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.434 10:19:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.434 10:19:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.434 10:19:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.434 10:19:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.434 10:19:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.434 10:19:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.434 10:19:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:15.434 10:19:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:15.434 10:19:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.434 10:19:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.434 10:19:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.434 10:19:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.434 10:19:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.434 10:19:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.434 10:19:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.434 10:19:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.434 10:19:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.434 10:19:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.434 10:19:28 -- paths/export.sh@5 -- # export PATH 00:24:15.434 10:19:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.434 10:19:28 -- nvmf/common.sh@46 -- # : 0 00:24:15.434 10:19:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:15.434 10:19:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:15.434 10:19:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:15.434 10:19:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.434 10:19:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.434 10:19:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:15.434 10:19:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:15.434 10:19:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:15.434 10:19:28 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:15.434 10:19:28 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:15.434 10:19:28 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:15.434 10:19:28 -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:15.434 10:19:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:15.434 10:19:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.434 10:19:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:15.434 10:19:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:15.434 10:19:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:15.434 10:19:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.434 10:19:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.434 10:19:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.434 10:19:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:15.434 10:19:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:15.434 10:19:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:15.434 10:19:28 -- common/autotest_common.sh@10 -- 
# set +x 00:24:20.700 10:19:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:20.700 10:19:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:20.700 10:19:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:20.700 10:19:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:20.700 10:19:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:20.700 10:19:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:20.700 10:19:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:20.700 10:19:33 -- nvmf/common.sh@294 -- # net_devs=() 00:24:20.700 10:19:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:20.700 10:19:33 -- nvmf/common.sh@295 -- # e810=() 00:24:20.700 10:19:33 -- nvmf/common.sh@295 -- # local -ga e810 00:24:20.700 10:19:33 -- nvmf/common.sh@296 -- # x722=() 00:24:20.700 10:19:33 -- nvmf/common.sh@296 -- # local -ga x722 00:24:20.700 10:19:33 -- nvmf/common.sh@297 -- # mlx=() 00:24:20.700 10:19:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:20.700 10:19:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.700 10:19:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:20.700 10:19:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:20.700 10:19:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:20.700 10:19:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:20.700 10:19:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:20.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:20.700 10:19:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:20.700 10:19:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:20.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:20.700 10:19:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.700 10:19:33 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:20.700 10:19:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:20.700 10:19:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.700 10:19:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:20.700 10:19:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.700 10:19:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:20.700 Found net devices under 0000:86:00.0: cvl_0_0 00:24:20.700 10:19:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.700 10:19:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:20.700 10:19:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.700 10:19:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:20.700 10:19:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.700 10:19:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:20.700 Found net devices under 0000:86:00.1: cvl_0_1 00:24:20.700 10:19:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.700 10:19:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:20.700 10:19:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:20.700 10:19:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:20.700 10:19:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:20.700 10:19:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.700 10:19:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.700 10:19:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.700 10:19:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:20.700 10:19:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.700 10:19:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.700 10:19:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:20.700 10:19:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.700 10:19:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.700 10:19:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:20.700 10:19:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:20.700 10:19:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.700 10:19:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.700 10:19:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.700 10:19:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.700 10:19:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:20.700 10:19:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.700 10:19:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.958 10:19:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.958 10:19:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:20.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:20.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:24:20.958 00:24:20.958 --- 10.0.0.2 ping statistics --- 00:24:20.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.958 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:20.958 10:19:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:24:20.958 00:24:20.958 --- 10.0.0.1 ping statistics --- 00:24:20.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.958 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:24:20.958 10:19:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.958 10:19:34 -- nvmf/common.sh@410 -- # return 0 00:24:20.958 10:19:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:20.958 10:19:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.958 10:19:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:20.958 10:19:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:20.958 10:19:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.958 10:19:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:20.958 10:19:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:20.958 10:19:34 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:20.958 10:19:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:20.958 10:19:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:20.958 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:20.958 10:19:34 -- nvmf/common.sh@469 -- # nvmfpid=373656 00:24:20.958 10:19:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.958 10:19:34 -- nvmf/common.sh@470 -- # waitforlisten 373656 00:24:20.958 10:19:34 -- common/autotest_common.sh@819 -- # '[' -z 373656 ']' 00:24:20.958 10:19:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.958 10:19:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:20.958 10:19:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.958 10:19:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:20.958 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:20.958 [2024-04-24 10:19:34.097693] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:20.958 [2024-04-24 10:19:34.097739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.958 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.958 [2024-04-24 10:19:34.155760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.958 [2024-04-24 10:19:34.235438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:20.958 [2024-04-24 10:19:34.235545] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
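Condensed, the nvmf_tcp_init sequence echoed above wires the two ports of the ice NIC (cvl_0_0 / cvl_0_1) back-to-back through a network namespace, verifies reachability in both directions, and then starts the target inside that namespace. A sketch using the exact commands from this log (only the NS variable and the backgrounding of nvmf_tgt are added here):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec $NS ping -c 1 10.0.0.1     # target namespace -> root namespace
# Launch the target inside the namespace (binary path as used in this run;
# the harness keeps it running in the background while the test proceeds):
ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &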
00:24:20.958 [2024-04-24 10:19:34.235553] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.958 [2024-04-24 10:19:34.235559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.958 [2024-04-24 10:19:34.235604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.958 [2024-04-24 10:19:34.235624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.958 [2024-04-24 10:19:34.235695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.958 [2024-04-24 10:19:34.235698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.891 10:19:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:21.891 10:19:34 -- common/autotest_common.sh@852 -- # return 0 00:24:21.891 10:19:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:21.891 10:19:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:21.891 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.891 10:19:34 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.891 10:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 [2024-04-24 10:19:34.954384] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.891 10:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:34 -- target/multiconnection.sh@21 -- # seq 1 11 00:24:21.891 10:19:34 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.891 10:19:34 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:21.891 10:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 Malloc1 00:24:21.891 10:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:34 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:21.891 10:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:34 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:21.891 10:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 [2024-04-24 10:19:35.010102] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.891 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:21.891 10:19:35 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 Malloc2 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.891 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 Malloc3 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.891 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 Malloc4 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.891 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 Malloc5 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.891 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.891 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:21.891 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.891 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.149 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.149 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:22.149 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.149 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.149 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.149 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.149 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:22.149 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.149 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.149 Malloc6 00:24:22.149 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.149 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:22.149 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.149 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.149 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.149 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:22.149 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.149 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.149 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.150 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 Malloc7 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.150 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 Malloc8 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.150 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 Malloc9 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.150 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 Malloc10 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.150 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.150 10:19:35 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.150 10:19:35 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:22.150 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.150 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.407 Malloc11 00:24:22.407 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.407 10:19:35 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:22.407 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.407 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.407 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.407 10:19:35 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:22.407 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.407 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.407 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.407 10:19:35 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:22.407 10:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.407 10:19:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.407 10:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.407 10:19:35 -- target/multiconnection.sh@28 -- # seq 1 11 00:24:22.407 10:19:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.407 10:19:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:23.797 10:19:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:23.797 10:19:36 -- common/autotest_common.sh@1177 -- # local i=0 00:24:23.797 10:19:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.797 10:19:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:23.797 10:19:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:25.697 10:19:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:25.697 10:19:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:25.697 10:19:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:24:25.697 10:19:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:25.697 10:19:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.697 10:19:38 -- common/autotest_common.sh@1187 -- # return 0 00:24:25.697 10:19:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.697 10:19:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:26.629 10:19:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:26.629 10:19:39 -- common/autotest_common.sh@1177 -- # local i=0 00:24:26.629 10:19:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.629 10:19:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:26.629 10:19:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:29.193 10:19:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:29.193 10:19:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:29.193 10:19:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:24:29.193 10:19:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:29.193 10:19:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.193 10:19:41 -- common/autotest_common.sh@1187 -- # return 0 00:24:29.193 10:19:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.193 10:19:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:29.757 10:19:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:29.757 10:19:42 -- common/autotest_common.sh@1177 -- # local i=0 00:24:29.757 10:19:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.757 10:19:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:29.757 10:19:42 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:24:32.280 10:19:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:32.280 10:19:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:32.280 10:19:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:24:32.280 10:19:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:32.280 10:19:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.280 10:19:45 -- common/autotest_common.sh@1187 -- # return 0 00:24:32.280 10:19:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.280 10:19:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:33.210 10:19:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:33.210 10:19:46 -- common/autotest_common.sh@1177 -- # local i=0 00:24:33.210 10:19:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.210 10:19:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:33.210 10:19:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:35.104 10:19:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:35.104 10:19:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:35.104 10:19:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:35.104 10:19:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:35.104 10:19:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.104 10:19:48 -- common/autotest_common.sh@1187 -- # return 0 00:24:35.104 10:19:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.104 10:19:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:36.473 10:19:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:36.473 10:19:49 -- common/autotest_common.sh@1177 -- # local i=0 00:24:36.473 10:19:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.473 10:19:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:36.473 10:19:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:38.366 10:19:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:38.366 10:19:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:38.366 10:19:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:38.366 10:19:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:38.366 10:19:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.366 10:19:51 -- common/autotest_common.sh@1187 -- # return 0 00:24:38.366 10:19:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.366 10:19:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:39.736 10:19:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:39.736 10:19:52 -- common/autotest_common.sh@1177 -- # local i=0 00:24:39.736 10:19:52 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:24:39.736 10:19:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:39.736 10:19:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:41.630 10:19:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:41.630 10:19:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:41.630 10:19:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:41.630 10:19:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:41.630 10:19:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.630 10:19:54 -- common/autotest_common.sh@1187 -- # return 0 00:24:41.630 10:19:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.630 10:19:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:42.999 10:19:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:42.999 10:19:55 -- common/autotest_common.sh@1177 -- # local i=0 00:24:42.999 10:19:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.999 10:19:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:42.999 10:19:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:44.895 10:19:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:44.895 10:19:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:44.895 10:19:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:44.895 10:19:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:44.895 10:19:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.895 10:19:57 -- common/autotest_common.sh@1187 -- # return 0 00:24:44.895 10:19:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.895 10:19:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:46.265 10:19:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:46.265 10:19:59 -- common/autotest_common.sh@1177 -- # local i=0 00:24:46.265 10:19:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.265 10:19:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:46.265 10:19:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:48.158 10:20:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:48.158 10:20:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:48.158 10:20:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:48.158 10:20:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:48.158 10:20:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.158 10:20:01 -- common/autotest_common.sh@1187 -- # return 0 00:24:48.158 10:20:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.158 10:20:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:49.531 10:20:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:49.531 
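The remaining connects for cnode10 and cnode11 follow below in the same pattern. Stripped of the xtrace noise, the whole multiconnection setup is two loops over NVMF_SUBSYS=11; this is a sketch reconstructed from the RPC calls and connect commands echoed in this log, where rpc_cmd is the harness wrapper around scripts/rpc.py and NVME_HOSTNQN / NVME_HOSTID are the gen-hostnqn values sourced from nvmf/common.sh earlier:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # once, before the loop
for i in $(seq 1 11); do
  rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MiB bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
for i in $(seq 1 11); do
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
  # waitforserial, as traced above: poll lsblk every 2 s until a block
  # device with serial SPDK$i shows up.
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do sleep 2; done
done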
10:20:02 -- common/autotest_common.sh@1177 -- # local i=0 00:24:49.531 10:20:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.531 10:20:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:49.531 10:20:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:51.425 10:20:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:51.425 10:20:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:51.425 10:20:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:51.425 10:20:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:51.425 10:20:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.425 10:20:04 -- common/autotest_common.sh@1187 -- # return 0 00:24:51.425 10:20:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.425 10:20:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:52.795 10:20:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:52.795 10:20:05 -- common/autotest_common.sh@1177 -- # local i=0 00:24:52.795 10:20:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.795 10:20:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:52.795 10:20:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:54.689 10:20:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:54.689 10:20:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:54.689 10:20:07 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:54.689 10:20:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:54.689 10:20:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.689 10:20:07 -- common/autotest_common.sh@1187 -- # return 0 00:24:54.689 10:20:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.689 10:20:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:56.586 10:20:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:56.586 10:20:09 -- common/autotest_common.sh@1177 -- # local i=0 00:24:56.586 10:20:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.586 10:20:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:56.586 10:20:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:58.509 10:20:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:58.509 10:20:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:58.509 10:20:11 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:58.509 10:20:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:58.509 10:20:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.509 10:20:11 -- common/autotest_common.sh@1187 -- # return 0 00:24:58.509 10:20:11 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:58.509 [global] 00:24:58.509 thread=1 00:24:58.509 invalidate=1 00:24:58.509 rw=read 00:24:58.509 time_based=1 00:24:58.509 
runtime=10 00:24:58.509 ioengine=libaio 00:24:58.509 direct=1 00:24:58.509 bs=262144 00:24:58.509 iodepth=64 00:24:58.509 norandommap=1 00:24:58.509 numjobs=1 00:24:58.509 00:24:58.509 [job0] 00:24:58.509 filename=/dev/nvme0n1 00:24:58.509 [job1] 00:24:58.509 filename=/dev/nvme10n1 00:24:58.509 [job2] 00:24:58.509 filename=/dev/nvme1n1 00:24:58.509 [job3] 00:24:58.509 filename=/dev/nvme2n1 00:24:58.509 [job4] 00:24:58.509 filename=/dev/nvme3n1 00:24:58.509 [job5] 00:24:58.509 filename=/dev/nvme4n1 00:24:58.509 [job6] 00:24:58.509 filename=/dev/nvme5n1 00:24:58.509 [job7] 00:24:58.509 filename=/dev/nvme6n1 00:24:58.509 [job8] 00:24:58.509 filename=/dev/nvme7n1 00:24:58.509 [job9] 00:24:58.509 filename=/dev/nvme8n1 00:24:58.509 [job10] 00:24:58.509 filename=/dev/nvme9n1 00:24:58.509 Could not set queue depth (nvme0n1) 00:24:58.509 Could not set queue depth (nvme10n1) 00:24:58.509 Could not set queue depth (nvme1n1) 00:24:58.509 Could not set queue depth (nvme2n1) 00:24:58.509 Could not set queue depth (nvme3n1) 00:24:58.509 Could not set queue depth (nvme4n1) 00:24:58.509 Could not set queue depth (nvme5n1) 00:24:58.509 Could not set queue depth (nvme6n1) 00:24:58.509 Could not set queue depth (nvme7n1) 00:24:58.509 Could not set queue depth (nvme8n1) 00:24:58.509 Could not set queue depth (nvme9n1) 00:24:58.770 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:58.770 fio-3.35 00:24:58.770 Starting 11 threads 00:25:10.975 00:25:10.975 job0: (groupid=0, jobs=1): err= 0: pid=380250: Wed Apr 24 10:20:22 2024 00:25:10.975 read: IOPS=828, BW=207MiB/s (217MB/s)(2090MiB/10087msec) 00:25:10.975 slat (usec): min=7, max=101835, avg=899.98, stdev=3768.05 00:25:10.975 clat (usec): min=969, max=246043, avg=76209.87, stdev=41569.52 00:25:10.975 lat (usec): min=996, max=246080, avg=77109.85, stdev=42149.89 00:25:10.975 clat percentiles (msec): 00:25:10.975 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 32], 00:25:10.975 | 30.00th=[ 49], 40.00th=[ 62], 50.00th=[ 74], 60.00th=[ 87], 00:25:10.975 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 134], 95.00th=[ 150], 00:25:10.975 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 203], 99.95th=[ 207], 00:25:10.975 | 99.99th=[ 247] 00:25:10.975 bw ( KiB/s): min=98816, max=410624, per=9.45%, 
avg=212403.20, stdev=81842.03, samples=20 00:25:10.975 iops : min= 386, max= 1604, avg=829.70, stdev=319.70, samples=20 00:25:10.975 lat (usec) : 1000=0.02% 00:25:10.975 lat (msec) : 2=0.47%, 4=0.29%, 10=1.33%, 20=4.23%, 50=25.10% 00:25:10.975 lat (msec) : 100=39.13%, 250=29.44% 00:25:10.975 cpu : usr=0.30%, sys=3.09%, ctx=2134, majf=0, minf=4097 00:25:10.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:10.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.975 issued rwts: total=8360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.975 job1: (groupid=0, jobs=1): err= 0: pid=380263: Wed Apr 24 10:20:22 2024 00:25:10.975 read: IOPS=784, BW=196MiB/s (206MB/s)(1977MiB/10077msec) 00:25:10.975 slat (usec): min=8, max=77115, avg=823.12, stdev=3246.62 00:25:10.975 clat (msec): min=2, max=238, avg=80.64, stdev=38.23 00:25:10.975 lat (msec): min=2, max=244, avg=81.47, stdev=38.69 00:25:10.975 clat percentiles (msec): 00:25:10.975 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 30], 20.00th=[ 45], 00:25:10.975 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 91], 00:25:10.975 | 70.00th=[ 101], 80.00th=[ 110], 90.00th=[ 126], 95.00th=[ 153], 00:25:10.975 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 205], 00:25:10.975 | 99.99th=[ 239] 00:25:10.975 bw ( KiB/s): min=138752, max=314368, per=8.94%, avg=200774.25, stdev=45357.90, samples=20 00:25:10.975 iops : min= 542, max= 1228, avg=784.25, stdev=177.18, samples=20 00:25:10.976 lat (msec) : 4=0.10%, 10=2.02%, 20=2.42%, 50=18.43%, 100=47.27% 00:25:10.976 lat (msec) : 250=29.76% 00:25:10.976 cpu : usr=0.27%, sys=2.84%, ctx=2067, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=7906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job2: (groupid=0, jobs=1): err= 0: pid=380281: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=942, BW=236MiB/s (247MB/s)(2374MiB/10077msec) 00:25:10.976 slat (usec): min=9, max=77480, avg=903.32, stdev=2878.98 00:25:10.976 clat (msec): min=2, max=174, avg=66.90, stdev=31.47 00:25:10.976 lat (msec): min=2, max=192, avg=67.80, stdev=31.89 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 32], 00:25:10.976 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 75], 00:25:10.976 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 118], 00:25:10.976 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 165], 99.95th=[ 167], 00:25:10.976 | 99.99th=[ 176] 00:25:10.976 bw ( KiB/s): min=142336, max=482304, per=10.75%, avg=241484.80, stdev=92537.88, samples=20 00:25:10.976 iops : min= 556, max= 1884, avg=943.30, stdev=361.48, samples=20 00:25:10.976 lat (msec) : 4=0.17%, 10=1.25%, 20=3.11%, 50=27.51%, 100=52.77% 00:25:10.976 lat (msec) : 250=15.20% 00:25:10.976 cpu : usr=0.36%, sys=3.63%, ctx=2126, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=9496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job3: (groupid=0, jobs=1): err= 0: pid=380291: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=835, BW=209MiB/s (219MB/s)(2108MiB/10088msec) 00:25:10.976 slat (usec): min=7, max=144076, avg=733.48, stdev=3970.63 00:25:10.976 clat (msec): min=2, max=246, avg=75.73, stdev=41.15 00:25:10.976 lat (msec): min=2, max=327, avg=76.47, stdev=41.68 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 23], 20.00th=[ 36], 00:25:10.976 | 30.00th=[ 49], 40.00th=[ 64], 50.00th=[ 79], 60.00th=[ 87], 00:25:10.976 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 131], 95.00th=[ 146], 00:25:10.976 | 99.00th=[ 186], 99.50th=[ 199], 99.90th=[ 207], 99.95th=[ 207], 00:25:10.976 | 99.99th=[ 247] 00:25:10.976 bw ( KiB/s): min=136704, max=351744, per=9.54%, avg=214246.40, stdev=60163.92, samples=20 00:25:10.976 iops : min= 534, max= 1374, avg=836.90, stdev=235.02, samples=20 00:25:10.976 lat (msec) : 4=0.46%, 10=2.49%, 20=5.88%, 50=22.33%, 100=44.52% 00:25:10.976 lat (msec) : 250=24.31% 00:25:10.976 cpu : usr=0.30%, sys=2.85%, ctx=2311, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=8432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job4: (groupid=0, jobs=1): err= 0: pid=380297: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=955, BW=239MiB/s (251MB/s)(2395MiB/10024msec) 00:25:10.976 slat (usec): min=10, max=109115, avg=797.08, stdev=3246.63 00:25:10.976 clat (usec): min=1164, max=230001, avg=66110.62, stdev=39600.36 00:25:10.976 lat (usec): min=1225, max=255102, avg=66907.69, stdev=40127.77 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 32], 00:25:10.976 | 30.00th=[ 41], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 67], 00:25:10.976 | 70.00th=[ 81], 80.00th=[ 101], 90.00th=[ 124], 95.00th=[ 148], 00:25:10.976 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 218], 00:25:10.976 | 99.99th=[ 230] 00:25:10.976 bw ( KiB/s): min=111104, max=476672, per=10.85%, avg=243689.45, stdev=93385.81, samples=20 00:25:10.976 iops : min= 434, max= 1862, avg=951.90, stdev=364.78, samples=20 00:25:10.976 lat (msec) : 2=0.01%, 4=0.52%, 10=3.19%, 20=4.06%, 50=32.19% 00:25:10.976 lat (msec) : 100=40.00%, 250=20.03% 00:25:10.976 cpu : usr=0.36%, sys=3.36%, ctx=2251, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=9581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job5: (groupid=0, jobs=1): err= 0: pid=380318: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=754, BW=189MiB/s (198MB/s)(1903MiB/10084msec) 00:25:10.976 slat (usec): min=8, max=158148, avg=970.13, stdev=4129.52 00:25:10.976 clat (msec): min=2, max=237, avg=83.64, stdev=42.20 00:25:10.976 lat (msec): min=2, max=330, 
avg=84.61, stdev=42.80 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 42], 00:25:10.976 | 30.00th=[ 60], 40.00th=[ 77], 50.00th=[ 86], 60.00th=[ 94], 00:25:10.976 | 70.00th=[ 106], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 157], 00:25:10.976 | 99.00th=[ 184], 99.50th=[ 232], 99.90th=[ 234], 99.95th=[ 236], 00:25:10.976 | 99.99th=[ 239] 00:25:10.976 bw ( KiB/s): min=104960, max=401408, per=8.60%, avg=193296.10, stdev=72534.29, samples=20 00:25:10.976 iops : min= 410, max= 1568, avg=755.05, stdev=283.34, samples=20 00:25:10.976 lat (msec) : 4=0.03%, 10=1.88%, 20=4.73%, 50=19.13%, 100=39.43% 00:25:10.976 lat (msec) : 250=34.81% 00:25:10.976 cpu : usr=0.28%, sys=2.77%, ctx=1962, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=7613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job6: (groupid=0, jobs=1): err= 0: pid=380328: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=799, BW=200MiB/s (210MB/s)(2015MiB/10077msec) 00:25:10.976 slat (usec): min=10, max=123865, avg=698.00, stdev=4314.01 00:25:10.976 clat (usec): min=1517, max=292190, avg=79247.90, stdev=47109.46 00:25:10.976 lat (usec): min=1557, max=301547, avg=79945.90, stdev=47698.77 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 37], 00:25:10.976 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 72], 60.00th=[ 88], 00:25:10.976 | 70.00th=[ 103], 80.00th=[ 125], 90.00th=[ 148], 95.00th=[ 167], 00:25:10.976 | 99.00th=[ 188], 99.50th=[ 197], 99.90th=[ 205], 99.95th=[ 209], 00:25:10.976 | 99.99th=[ 292] 00:25:10.976 bw ( KiB/s): min=111104, max=335360, per=9.11%, avg=204697.60, stdev=63174.64, samples=20 00:25:10.976 iops : min= 434, max= 1310, avg=799.60, stdev=246.78, samples=20 00:25:10.976 lat (msec) : 2=0.01%, 4=0.55%, 10=3.13%, 20=6.01%, 50=22.26% 00:25:10.976 lat (msec) : 100=36.89%, 250=31.13%, 500=0.02% 00:25:10.976 cpu : usr=0.28%, sys=2.89%, ctx=2219, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=8059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job7: (groupid=0, jobs=1): err= 0: pid=380336: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=653, BW=163MiB/s (171MB/s)(1645MiB/10075msec) 00:25:10.976 slat (usec): min=10, max=85697, avg=930.79, stdev=3978.55 00:25:10.976 clat (msec): min=2, max=253, avg=96.92, stdev=38.60 00:25:10.976 lat (msec): min=2, max=253, avg=97.85, stdev=39.08 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 49], 20.00th=[ 65], 00:25:10.976 | 30.00th=[ 78], 40.00th=[ 89], 50.00th=[ 100], 60.00th=[ 108], 00:25:10.976 | 70.00th=[ 115], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 159], 00:25:10.976 | 99.00th=[ 199], 99.50th=[ 205], 99.90th=[ 215], 99.95th=[ 215], 00:25:10.976 | 99.99th=[ 253] 00:25:10.976 bw ( KiB/s): min=108544, max=228864, per=7.43%, avg=166880.80, stdev=34985.70, samples=20 00:25:10.976 iops 
: min= 424, max= 894, avg=651.85, stdev=136.63, samples=20 00:25:10.976 lat (msec) : 4=0.06%, 10=0.82%, 20=2.48%, 50=7.61%, 100=40.48% 00:25:10.976 lat (msec) : 250=48.53%, 500=0.02% 00:25:10.976 cpu : usr=0.24%, sys=2.33%, ctx=1927, majf=0, minf=4097 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.976 issued rwts: total=6581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.976 job8: (groupid=0, jobs=1): err= 0: pid=380358: Wed Apr 24 10:20:22 2024 00:25:10.976 read: IOPS=722, BW=181MiB/s (189MB/s)(1821MiB/10078msec) 00:25:10.976 slat (usec): min=15, max=108405, avg=1130.48, stdev=4142.59 00:25:10.976 clat (usec): min=922, max=265487, avg=87315.01, stdev=38980.35 00:25:10.976 lat (usec): min=966, max=265540, avg=88445.49, stdev=39562.24 00:25:10.976 clat percentiles (msec): 00:25:10.976 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 39], 20.00th=[ 59], 00:25:10.976 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 95], 00:25:10.976 | 70.00th=[ 105], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 159], 00:25:10.976 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 232], 99.95th=[ 251], 00:25:10.976 | 99.99th=[ 266] 00:25:10.976 bw ( KiB/s): min=89088, max=285696, per=8.23%, avg=184832.00, stdev=47855.46, samples=20 00:25:10.976 iops : min= 348, max= 1116, avg=722.00, stdev=186.94, samples=20 00:25:10.976 lat (usec) : 1000=0.01% 00:25:10.976 lat (msec) : 2=0.25%, 4=0.43%, 10=1.36%, 20=2.99%, 50=9.67% 00:25:10.976 lat (msec) : 100=50.06%, 250=35.19%, 500=0.04% 00:25:10.976 cpu : usr=0.33%, sys=2.94%, ctx=1713, majf=0, minf=3347 00:25:10.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:10.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.977 issued rwts: total=7283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.977 job9: (groupid=0, jobs=1): err= 0: pid=380366: Wed Apr 24 10:20:22 2024 00:25:10.977 read: IOPS=792, BW=198MiB/s (208MB/s)(1998MiB/10087msec) 00:25:10.977 slat (usec): min=8, max=55109, avg=827.92, stdev=3008.71 00:25:10.977 clat (usec): min=1248, max=212418, avg=79867.96, stdev=39280.44 00:25:10.977 lat (usec): min=1281, max=218476, avg=80695.87, stdev=39776.12 00:25:10.977 clat percentiles (msec): 00:25:10.977 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 26], 20.00th=[ 43], 00:25:10.977 | 30.00th=[ 58], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 91], 00:25:10.977 | 70.00th=[ 101], 80.00th=[ 113], 90.00th=[ 128], 95.00th=[ 144], 00:25:10.977 | 99.00th=[ 186], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 213], 00:25:10.977 | 99.99th=[ 213] 00:25:10.977 bw ( KiB/s): min=94720, max=339456, per=9.03%, avg=202931.20, stdev=71001.23, samples=20 00:25:10.977 iops : min= 370, max= 1326, avg=792.70, stdev=277.35, samples=20 00:25:10.977 lat (msec) : 2=0.05%, 4=0.60%, 10=1.55%, 20=3.93%, 50=19.55% 00:25:10.977 lat (msec) : 100=43.44%, 250=30.88% 00:25:10.977 cpu : usr=0.28%, sys=3.02%, ctx=2134, majf=0, minf=4097 00:25:10.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:10.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.977 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.977 issued rwts: total=7990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.977 job10: (groupid=0, jobs=1): err= 0: pid=380374: Wed Apr 24 10:20:22 2024 00:25:10.977 read: IOPS=716, BW=179MiB/s (188MB/s)(1807MiB/10082msec) 00:25:10.977 slat (usec): min=10, max=115197, avg=957.79, stdev=4421.84 00:25:10.977 clat (usec): min=1095, max=275503, avg=88240.95, stdev=50088.54 00:25:10.977 lat (usec): min=1122, max=277554, avg=89198.74, stdev=50699.37 00:25:10.977 clat percentiles (msec): 00:25:10.977 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 29], 00:25:10.977 | 30.00th=[ 71], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 103], 00:25:10.977 | 70.00th=[ 114], 80.00th=[ 131], 90.00th=[ 153], 95.00th=[ 171], 00:25:10.977 | 99.00th=[ 201], 99.50th=[ 205], 99.90th=[ 213], 99.95th=[ 213], 00:25:10.977 | 99.99th=[ 275] 00:25:10.977 bw ( KiB/s): min=107520, max=357888, per=8.16%, avg=183390.75, stdev=66796.67, samples=20 00:25:10.977 iops : min= 420, max= 1398, avg=716.35, stdev=260.93, samples=20 00:25:10.977 lat (msec) : 2=0.22%, 4=2.10%, 10=5.78%, 20=7.31%, 50=9.34% 00:25:10.977 lat (msec) : 100=32.99%, 250=42.21%, 500=0.04% 00:25:10.977 cpu : usr=0.30%, sys=2.57%, ctx=1973, majf=0, minf=4097 00:25:10.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:10.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:10.977 issued rwts: total=7226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:10.977 00:25:10.977 Run status group 0 (all jobs): 00:25:10.977 READ: bw=2194MiB/s (2300MB/s), 163MiB/s-239MiB/s (171MB/s-251MB/s), io=21.6GiB (23.2GB), run=10024-10088msec 00:25:10.977 00:25:10.977 Disk stats (read/write): 00:25:10.977 nvme0n1: ios=16510/0, merge=0/0, ticks=1232165/0, in_queue=1232165, util=97.22% 00:25:10.977 nvme10n1: ios=15623/0, merge=0/0, ticks=1238652/0, in_queue=1238652, util=97.42% 00:25:10.977 nvme1n1: ios=18796/0, merge=0/0, ticks=1229799/0, in_queue=1229799, util=97.72% 00:25:10.977 nvme2n1: ios=16679/0, merge=0/0, ticks=1238743/0, in_queue=1238743, util=97.86% 00:25:10.977 nvme3n1: ios=18800/0, merge=0/0, ticks=1236972/0, in_queue=1236972, util=97.88% 00:25:10.977 nvme4n1: ios=15031/0, merge=0/0, ticks=1232312/0, in_queue=1232312, util=98.24% 00:25:10.977 nvme5n1: ios=15915/0, merge=0/0, ticks=1237090/0, in_queue=1237090, util=98.40% 00:25:10.977 nvme6n1: ios=12965/0, merge=0/0, ticks=1237212/0, in_queue=1237212, util=98.52% 00:25:10.977 nvme7n1: ios=14381/0, merge=0/0, ticks=1230582/0, in_queue=1230582, util=98.93% 00:25:10.977 nvme8n1: ios=15782/0, merge=0/0, ticks=1233391/0, in_queue=1233391, util=99.05% 00:25:10.977 nvme9n1: ios=14268/0, merge=0/0, ticks=1233834/0, in_queue=1233834, util=99.20% 00:25:10.977 10:20:22 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:10.977 [global] 00:25:10.977 thread=1 00:25:10.977 invalidate=1 00:25:10.977 rw=randwrite 00:25:10.977 time_based=1 00:25:10.977 runtime=10 00:25:10.977 ioengine=libaio 00:25:10.977 direct=1 00:25:10.977 bs=262144 00:25:10.977 iodepth=64 00:25:10.977 norandommap=1 00:25:10.977 numjobs=1 00:25:10.977 00:25:10.977 [job0] 00:25:10.977 filename=/dev/nvme0n1 00:25:10.977 [job1] 
00:25:10.977 filename=/dev/nvme10n1 00:25:10.977 [job2] 00:25:10.977 filename=/dev/nvme1n1 00:25:10.977 [job3] 00:25:10.977 filename=/dev/nvme2n1 00:25:10.977 [job4] 00:25:10.977 filename=/dev/nvme3n1 00:25:10.977 [job5] 00:25:10.977 filename=/dev/nvme4n1 00:25:10.977 [job6] 00:25:10.977 filename=/dev/nvme5n1 00:25:10.977 [job7] 00:25:10.977 filename=/dev/nvme6n1 00:25:10.977 [job8] 00:25:10.977 filename=/dev/nvme7n1 00:25:10.977 [job9] 00:25:10.977 filename=/dev/nvme8n1 00:25:10.977 [job10] 00:25:10.977 filename=/dev/nvme9n1 00:25:10.977 Could not set queue depth (nvme0n1) 00:25:10.977 Could not set queue depth (nvme10n1) 00:25:10.977 Could not set queue depth (nvme1n1) 00:25:10.977 Could not set queue depth (nvme2n1) 00:25:10.977 Could not set queue depth (nvme3n1) 00:25:10.977 Could not set queue depth (nvme4n1) 00:25:10.977 Could not set queue depth (nvme5n1) 00:25:10.977 Could not set queue depth (nvme6n1) 00:25:10.977 Could not set queue depth (nvme7n1) 00:25:10.977 Could not set queue depth (nvme8n1) 00:25:10.977 Could not set queue depth (nvme9n1) 00:25:10.977 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:10.977 fio-3.35 00:25:10.977 Starting 11 threads 00:25:21.001 00:25:21.001 job0: (groupid=0, jobs=1): err= 0: pid=382013: Wed Apr 24 10:20:33 2024 00:25:21.001 write: IOPS=582, BW=146MiB/s (153MB/s)(1471MiB/10096msec); 0 zone resets 00:25:21.001 slat (usec): min=22, max=30485, avg=1226.75, stdev=3101.53 00:25:21.001 clat (usec): min=1732, max=280198, avg=108553.30, stdev=56363.96 00:25:21.001 lat (usec): min=1807, max=294858, avg=109780.05, stdev=56998.96 00:25:21.001 clat percentiles (msec): 00:25:21.001 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 31], 20.00th=[ 63], 00:25:21.001 | 30.00th=[ 80], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 126], 00:25:21.001 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 209], 00:25:21.001 | 99.00th=[ 262], 99.50th=[ 266], 99.90th=[ 275], 99.95th=[ 279], 00:25:21.001 | 99.99th=[ 279] 00:25:21.001 bw ( KiB/s): min=77824, max=235008, per=9.60%, avg=149006.95, stdev=39935.00, samples=20 00:25:21.001 iops : min= 304, max= 918, avg=582.05, stdev=156.01, samples=20 00:25:21.001 lat (msec) : 2=0.02%, 4=0.31%, 10=3.70%, 
20=2.74%, 50=9.94% 00:25:21.001 lat (msec) : 100=31.75%, 250=49.85%, 500=1.70% 00:25:21.001 cpu : usr=1.34%, sys=1.80%, ctx=3151, majf=0, minf=1 00:25:21.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:21.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.001 issued rwts: total=0,5884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.001 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.001 job1: (groupid=0, jobs=1): err= 0: pid=382025: Wed Apr 24 10:20:33 2024 00:25:21.001 write: IOPS=397, BW=99.4MiB/s (104MB/s)(1009MiB/10145msec); 0 zone resets 00:25:21.001 slat (usec): min=23, max=48361, avg=2407.92, stdev=4844.64 00:25:21.001 clat (msec): min=5, max=307, avg=158.44, stdev=55.45 00:25:21.002 lat (msec): min=5, max=307, avg=160.84, stdev=56.17 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 39], 5.00th=[ 73], 10.00th=[ 81], 20.00th=[ 110], 00:25:21.002 | 30.00th=[ 132], 40.00th=[ 146], 50.00th=[ 157], 60.00th=[ 169], 00:25:21.002 | 70.00th=[ 184], 80.00th=[ 209], 90.00th=[ 226], 95.00th=[ 259], 00:25:21.002 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 296], 99.95th=[ 296], 00:25:21.002 | 99.99th=[ 309] 00:25:21.002 bw ( KiB/s): min=61440, max=174080, per=6.55%, avg=101666.00, stdev=31496.68, samples=20 00:25:21.002 iops : min= 240, max= 680, avg=397.10, stdev=123.05, samples=20 00:25:21.002 lat (msec) : 10=0.10%, 20=0.22%, 50=1.66%, 100=15.37%, 250=76.48% 00:25:21.002 lat (msec) : 500=6.17% 00:25:21.002 cpu : usr=1.01%, sys=1.40%, ctx=1212, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,4035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.002 job2: (groupid=0, jobs=1): err= 0: pid=382026: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=613, BW=153MiB/s (161MB/s)(1552MiB/10119msec); 0 zone resets 00:25:21.002 slat (usec): min=19, max=88184, avg=1212.79, stdev=3616.78 00:25:21.002 clat (msec): min=2, max=308, avg=103.04, stdev=68.10 00:25:21.002 lat (msec): min=2, max=308, avg=104.26, stdev=68.75 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 45], 00:25:21.002 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 83], 00:25:21.002 | 70.00th=[ 131], 80.00th=[ 174], 90.00th=[ 218], 95.00th=[ 239], 00:25:21.002 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288], 00:25:21.002 | 99.99th=[ 309] 00:25:21.002 bw ( KiB/s): min=69632, max=359424, per=10.14%, avg=157300.10, stdev=82501.31, samples=20 00:25:21.002 iops : min= 272, max= 1404, avg=614.45, stdev=322.27, samples=20 00:25:21.002 lat (msec) : 4=0.27%, 10=1.03%, 20=2.01%, 50=20.35%, 100=41.28% 00:25:21.002 lat (msec) : 250=32.08%, 500=2.98% 00:25:21.002 cpu : usr=1.29%, sys=1.95%, ctx=2763, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,6207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:25:21.002 job3: (groupid=0, jobs=1): err= 0: pid=382027: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=610, BW=153MiB/s (160MB/s)(1545MiB/10128msec); 0 zone resets 00:25:21.002 slat (usec): min=17, max=119409, avg=1307.00, stdev=3921.11 00:25:21.002 clat (usec): min=1586, max=300176, avg=103492.45, stdev=64085.77 00:25:21.002 lat (usec): min=1656, max=300235, avg=104799.46, stdev=64963.74 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 42], 00:25:21.002 | 30.00th=[ 63], 40.00th=[ 73], 50.00th=[ 99], 60.00th=[ 124], 00:25:21.002 | 70.00th=[ 133], 80.00th=[ 157], 90.00th=[ 199], 95.00th=[ 222], 00:25:21.002 | 99.00th=[ 264], 99.50th=[ 266], 99.90th=[ 288], 99.95th=[ 288], 00:25:21.002 | 99.99th=[ 300] 00:25:21.002 bw ( KiB/s): min=63488, max=379904, per=10.10%, avg=156637.20, stdev=79372.63, samples=20 00:25:21.002 iops : min= 248, max= 1484, avg=611.85, stdev=310.05, samples=20 00:25:21.002 lat (msec) : 2=0.10%, 4=0.45%, 10=2.85%, 20=4.84%, 50=17.80% 00:25:21.002 lat (msec) : 100=24.54%, 250=47.40%, 500=2.02% 00:25:21.002 cpu : usr=1.34%, sys=2.00%, ctx=3063, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,6181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.002 job4: (groupid=0, jobs=1): err= 0: pid=382029: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=536, BW=134MiB/s (141MB/s)(1358MiB/10131msec); 0 zone resets 00:25:21.002 slat (usec): min=25, max=133850, avg=1564.43, stdev=4413.31 00:25:21.002 clat (usec): min=1251, max=403080, avg=117746.22, stdev=69491.60 00:25:21.002 lat (usec): min=1325, max=403137, avg=119310.66, stdev=70466.84 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 30], 20.00th=[ 67], 00:25:21.002 | 30.00th=[ 79], 40.00th=[ 102], 50.00th=[ 111], 60.00th=[ 130], 00:25:21.002 | 70.00th=[ 136], 80.00th=[ 157], 90.00th=[ 220], 95.00th=[ 257], 00:25:21.002 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 401], 00:25:21.002 | 99.99th=[ 405] 00:25:21.002 bw ( KiB/s): min=43008, max=266752, per=8.86%, avg=137406.65, stdev=52906.83, samples=20 00:25:21.002 iops : min= 168, max= 1042, avg=536.70, stdev=206.69, samples=20 00:25:21.002 lat (msec) : 2=0.04%, 4=0.24%, 10=1.49%, 20=6.02%, 50=8.14% 00:25:21.002 lat (msec) : 100=22.43%, 250=56.14%, 500=5.51% 00:25:21.002 cpu : usr=1.24%, sys=1.73%, ctx=2453, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,5431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.002 job5: (groupid=0, jobs=1): err= 0: pid=382036: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=638, BW=160MiB/s (168MB/s)(1618MiB/10130msec); 0 zone resets 00:25:21.002 slat (usec): min=15, max=48987, avg=1250.72, stdev=2811.65 00:25:21.002 clat (msec): min=2, max=292, avg=98.87, stdev=45.75 00:25:21.002 lat (msec): min=2, max=292, avg=100.12, stdev=46.27 00:25:21.002 clat 
percentiles (msec): 00:25:21.002 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 44], 20.00th=[ 54], 00:25:21.002 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 109], 00:25:21.002 | 70.00th=[ 129], 80.00th=[ 140], 90.00th=[ 161], 95.00th=[ 174], 00:25:21.002 | 99.00th=[ 199], 99.50th=[ 230], 99.90th=[ 271], 99.95th=[ 279], 00:25:21.002 | 99.99th=[ 292] 00:25:21.002 bw ( KiB/s): min=98816, max=298496, per=10.58%, avg=164113.35, stdev=58290.71, samples=20 00:25:21.002 iops : min= 386, max= 1166, avg=641.05, stdev=227.69, samples=20 00:25:21.002 lat (msec) : 4=0.08%, 10=0.80%, 20=2.02%, 50=16.21%, 100=38.98% 00:25:21.002 lat (msec) : 250=41.70%, 500=0.22% 00:25:21.002 cpu : usr=1.59%, sys=1.95%, ctx=2727, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,6473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.002 job6: (groupid=0, jobs=1): err= 0: pid=382037: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=492, BW=123MiB/s (129MB/s)(1250MiB/10149msec); 0 zone resets 00:25:21.002 slat (usec): min=23, max=52398, avg=1934.81, stdev=4354.56 00:25:21.002 clat (msec): min=3, max=319, avg=127.91, stdev=71.17 00:25:21.002 lat (msec): min=6, max=319, avg=129.85, stdev=72.17 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 42], 00:25:21.002 | 30.00th=[ 73], 40.00th=[ 109], 50.00th=[ 129], 60.00th=[ 148], 00:25:21.002 | 70.00th=[ 167], 80.00th=[ 190], 90.00th=[ 228], 95.00th=[ 253], 00:25:21.002 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 321], 00:25:21.002 | 99.99th=[ 321] 00:25:21.002 bw ( KiB/s): min=55296, max=402432, per=8.14%, avg=126348.70, stdev=84583.50, samples=20 00:25:21.002 iops : min= 216, max= 1572, avg=493.50, stdev=330.40, samples=20 00:25:21.002 lat (msec) : 4=0.02%, 10=0.62%, 20=0.04%, 50=24.62%, 100=9.00% 00:25:21.002 lat (msec) : 250=60.19%, 500=5.50% 00:25:21.002 cpu : usr=1.28%, sys=1.69%, ctx=1474, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,4999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.002 job7: (groupid=0, jobs=1): err= 0: pid=382038: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=564, BW=141MiB/s (148MB/s)(1418MiB/10043msec); 0 zone resets 00:25:21.002 slat (usec): min=17, max=94195, avg=1307.46, stdev=4044.91 00:25:21.002 clat (msec): min=2, max=362, avg=111.93, stdev=73.37 00:25:21.002 lat (msec): min=2, max=362, avg=113.24, stdev=74.44 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 55], 00:25:21.002 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 102], 60.00th=[ 115], 00:25:21.002 | 70.00th=[ 138], 80.00th=[ 161], 90.00th=[ 226], 95.00th=[ 262], 00:25:21.002 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 359], 00:25:21.002 | 99.99th=[ 363] 00:25:21.002 bw ( KiB/s): min=54784, max=290816, per=9.26%, avg=143628.45, stdev=66146.92, samples=20 00:25:21.002 iops : min= 214, max= 1136, 
avg=561.00, stdev=258.40, samples=20 00:25:21.002 lat (msec) : 4=0.07%, 10=1.53%, 20=6.10%, 50=11.37%, 100=30.60% 00:25:21.002 lat (msec) : 250=44.63%, 500=5.69% 00:25:21.002 cpu : usr=1.32%, sys=1.88%, ctx=3137, majf=0, minf=1 00:25:21.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:21.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.002 issued rwts: total=0,5673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.002 job8: (groupid=0, jobs=1): err= 0: pid=382039: Wed Apr 24 10:20:33 2024 00:25:21.002 write: IOPS=555, BW=139MiB/s (146MB/s)(1408MiB/10140msec); 0 zone resets 00:25:21.002 slat (usec): min=19, max=87078, avg=1364.63, stdev=4589.32 00:25:21.002 clat (usec): min=1962, max=342887, avg=113789.38, stdev=77656.21 00:25:21.002 lat (msec): min=2, max=342, avg=115.15, stdev=78.60 00:25:21.002 clat percentiles (msec): 00:25:21.002 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 39], 00:25:21.002 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 95], 60.00th=[ 130], 00:25:21.002 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 230], 95.00th=[ 257], 00:25:21.002 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 334], 99.95th=[ 334], 00:25:21.003 | 99.99th=[ 342] 00:25:21.003 bw ( KiB/s): min=55296, max=247808, per=9.19%, avg=142581.45, stdev=63388.13, samples=20 00:25:21.003 iops : min= 216, max= 968, avg=556.95, stdev=247.61, samples=20 00:25:21.003 lat (msec) : 2=0.02%, 4=0.32%, 10=4.60%, 20=6.23%, 50=11.56% 00:25:21.003 lat (msec) : 100=29.37%, 250=42.17%, 500=5.74% 00:25:21.003 cpu : usr=1.15%, sys=1.79%, ctx=2957, majf=0, minf=1 00:25:21.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:21.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.003 issued rwts: total=0,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.003 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.003 job9: (groupid=0, jobs=1): err= 0: pid=382040: Wed Apr 24 10:20:33 2024 00:25:21.003 write: IOPS=589, BW=147MiB/s (154MB/s)(1495MiB/10145msec); 0 zone resets 00:25:21.003 slat (usec): min=19, max=91147, avg=1308.59, stdev=3951.66 00:25:21.003 clat (msec): min=2, max=356, avg=107.24, stdev=70.17 00:25:21.003 lat (msec): min=3, max=357, avg=108.55, stdev=71.04 00:25:21.003 clat percentiles (msec): 00:25:21.003 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 44], 00:25:21.003 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 104], 60.00th=[ 112], 00:25:21.003 | 70.00th=[ 133], 80.00th=[ 159], 90.00th=[ 203], 95.00th=[ 259], 00:25:21.003 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:25:21.003 | 99.99th=[ 359] 00:25:21.003 bw ( KiB/s): min=63488, max=387584, per=9.76%, avg=151449.60, stdev=75022.94, samples=20 00:25:21.003 iops : min= 248, max= 1514, avg=591.60, stdev=293.06, samples=20 00:25:21.003 lat (msec) : 4=0.15%, 10=1.62%, 20=6.37%, 50=17.29%, 100=21.99% 00:25:21.003 lat (msec) : 250=46.43%, 500=6.14% 00:25:21.003 cpu : usr=1.23%, sys=1.86%, ctx=3171, majf=0, minf=1 00:25:21.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:21.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:25:21.003 issued rwts: total=0,5979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.003 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.003 job10: (groupid=0, jobs=1): err= 0: pid=382041: Wed Apr 24 10:20:33 2024 00:25:21.003 write: IOPS=494, BW=124MiB/s (130MB/s)(1253MiB/10130msec); 0 zone resets 00:25:21.003 slat (usec): min=21, max=201369, avg=1457.98, stdev=5225.93 00:25:21.003 clat (msec): min=4, max=463, avg=127.62, stdev=68.70 00:25:21.003 lat (msec): min=4, max=473, avg=129.08, stdev=69.45 00:25:21.003 clat percentiles (msec): 00:25:21.003 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 72], 00:25:21.003 | 30.00th=[ 95], 40.00th=[ 102], 50.00th=[ 124], 60.00th=[ 134], 00:25:21.003 | 70.00th=[ 144], 80.00th=[ 180], 90.00th=[ 209], 95.00th=[ 266], 00:25:21.003 | 99.00th=[ 342], 99.50th=[ 388], 99.90th=[ 456], 99.95th=[ 464], 00:25:21.003 | 99.99th=[ 464] 00:25:21.003 bw ( KiB/s): min=53248, max=223744, per=8.16%, avg=126643.20, stdev=40021.34, samples=20 00:25:21.003 iops : min= 208, max= 874, avg=494.70, stdev=156.33, samples=20 00:25:21.003 lat (msec) : 10=0.82%, 20=2.04%, 50=8.22%, 100=27.86%, 250=54.45% 00:25:21.003 lat (msec) : 500=6.61% 00:25:21.003 cpu : usr=0.96%, sys=1.60%, ctx=2723, majf=0, minf=1 00:25:21.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:21.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:21.003 issued rwts: total=0,5010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.003 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:21.003 00:25:21.003 Run status group 0 (all jobs): 00:25:21.003 WRITE: bw=1515MiB/s (1589MB/s), 99.4MiB/s-160MiB/s (104MB/s-168MB/s), io=15.0GiB (16.1GB), run=10043-10149msec 00:25:21.003 00:25:21.003 Disk stats (read/write): 00:25:21.003 nvme0n1: ios=49/11584, merge=0/0, ticks=52/1219163, in_queue=1219215, util=97.41% 00:25:21.003 nvme10n1: ios=43/7889, merge=0/0, ticks=50/1204027, in_queue=1204077, util=97.57% 00:25:21.003 nvme1n1: ios=45/12251, merge=0/0, ticks=1761/1210676, in_queue=1212437, util=100.00% 00:25:21.003 nvme2n1: ios=44/12182, merge=0/0, ticks=1988/1203014, in_queue=1205002, util=100.00% 00:25:21.003 nvme3n1: ios=42/10677, merge=0/0, ticks=1123/1210635, in_queue=1211758, util=100.00% 00:25:21.003 nvme4n1: ios=0/12765, merge=0/0, ticks=0/1212356, in_queue=1212356, util=98.14% 00:25:21.003 nvme5n1: ios=43/9820, merge=0/0, ticks=663/1201788, in_queue=1202451, util=100.00% 00:25:21.003 nvme6n1: ios=47/10995, merge=0/0, ticks=1386/1219253, in_queue=1220639, util=100.00% 00:25:21.003 nvme7n1: ios=50/11091, merge=0/0, ticks=2778/1193578, in_queue=1196356, util=100.00% 00:25:21.003 nvme8n1: ios=0/11776, merge=0/0, ticks=0/1209337, in_queue=1209337, util=98.91% 00:25:21.003 nvme9n1: ios=42/9840, merge=0/0, ticks=1707/1192231, in_queue=1193938, util=100.00% 00:25:21.003 10:20:33 -- target/multiconnection.sh@36 -- # sync 00:25:21.003 10:20:33 -- target/multiconnection.sh@37 -- # seq 1 11 00:25:21.003 10:20:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.003 10:20:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:21.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:21.003 10:20:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:21.003 10:20:33 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.003 10:20:33 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.003 10:20:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:25:21.003 10:20:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.003 10:20:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:25:21.003 10:20:33 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.003 10:20:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.003 10:20:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.003 10:20:33 -- common/autotest_common.sh@10 -- # set +x 00:25:21.003 10:20:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.003 10:20:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.003 10:20:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:21.003 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:21.003 10:20:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:21.003 10:20:34 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.003 10:20:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.003 10:20:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:25:21.003 10:20:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.003 10:20:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:25:21.003 10:20:34 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.003 10:20:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:21.003 10:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.003 10:20:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.003 10:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.003 10:20:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.003 10:20:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:21.272 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:21.272 10:20:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:21.272 10:20:34 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.272 10:20:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.272 10:20:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:25:21.272 10:20:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.272 10:20:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:25:21.272 10:20:34 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.272 10:20:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:21.272 10:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.272 10:20:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.272 10:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.272 10:20:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.272 10:20:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:21.531 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:21.531 10:20:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:21.531 10:20:34 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.531 10:20:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.531 10:20:34 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:25:21.531 10:20:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.531 10:20:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:25:21.531 10:20:34 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.531 10:20:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:21.531 10:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.531 10:20:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.531 10:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.531 10:20:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.531 10:20:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:21.790 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:21.790 10:20:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:21.790 10:20:35 -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.790 10:20:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:21.790 10:20:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:25:21.790 10:20:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:21.790 10:20:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:25:21.790 10:20:35 -- common/autotest_common.sh@1210 -- # return 0 00:25:21.790 10:20:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:21.790 10:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.790 10:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.048 10:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.048 10:20:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.048 10:20:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:22.048 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:22.048 10:20:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:22.048 10:20:35 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.048 10:20:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.048 10:20:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:25:22.048 10:20:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.048 10:20:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:25:22.307 10:20:35 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.307 10:20:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:22.307 10:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.307 10:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.307 10:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.307 10:20:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.307 10:20:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:22.565 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:22.565 10:20:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:22.565 10:20:35 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.565 10:20:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.565 10:20:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:25:22.565 10:20:35 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:25:22.565 10:20:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.565 10:20:35 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.565 10:20:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:22.565 10:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.565 10:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.565 10:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.565 10:20:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.565 10:20:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:22.565 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:22.565 10:20:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:22.565 10:20:35 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.565 10:20:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.565 10:20:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:25:22.565 10:20:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.565 10:20:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:25:22.565 10:20:35 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.565 10:20:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:22.565 10:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.566 10:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.566 10:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.566 10:20:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.566 10:20:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:22.824 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:22.824 10:20:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:22.824 10:20:35 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.824 10:20:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.824 10:20:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:25:22.824 10:20:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.824 10:20:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:25:22.824 10:20:35 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.824 10:20:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:22.824 10:20:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.824 10:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.824 10:20:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.824 10:20:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.824 10:20:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:22.824 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:22.824 10:20:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:22.824 10:20:36 -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.824 10:20:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:22.824 10:20:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:25:22.824 10:20:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:25:22.824 10:20:36 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:22.824 10:20:36 -- common/autotest_common.sh@1210 -- # return 0 00:25:22.824 10:20:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:22.824 10:20:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.824 10:20:36 -- common/autotest_common.sh@10 -- # set +x 00:25:22.824 10:20:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.824 10:20:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.824 10:20:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:23.083 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:23.083 10:20:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:23.083 10:20:36 -- common/autotest_common.sh@1198 -- # local i=0 00:25:23.083 10:20:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:23.083 10:20:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:25:23.083 10:20:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:23.083 10:20:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:25:23.083 10:20:36 -- common/autotest_common.sh@1210 -- # return 0 00:25:23.083 10:20:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:23.083 10:20:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.083 10:20:36 -- common/autotest_common.sh@10 -- # set +x 00:25:23.083 10:20:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.083 10:20:36 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:23.083 10:20:36 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:23.083 10:20:36 -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:23.083 10:20:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:23.083 10:20:36 -- nvmf/common.sh@116 -- # sync 00:25:23.083 10:20:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:23.083 10:20:36 -- nvmf/common.sh@119 -- # set +e 00:25:23.083 10:20:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:23.083 10:20:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:23.083 rmmod nvme_tcp 00:25:23.083 rmmod nvme_fabrics 00:25:23.083 rmmod nvme_keyring 00:25:23.083 10:20:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:23.083 10:20:36 -- nvmf/common.sh@123 -- # set -e 00:25:23.083 10:20:36 -- nvmf/common.sh@124 -- # return 0 00:25:23.083 10:20:36 -- nvmf/common.sh@477 -- # '[' -n 373656 ']' 00:25:23.083 10:20:36 -- nvmf/common.sh@478 -- # killprocess 373656 00:25:23.083 10:20:36 -- common/autotest_common.sh@926 -- # '[' -z 373656 ']' 00:25:23.083 10:20:36 -- common/autotest_common.sh@930 -- # kill -0 373656 00:25:23.083 10:20:36 -- common/autotest_common.sh@931 -- # uname 00:25:23.083 10:20:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.083 10:20:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 373656 00:25:23.083 10:20:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:23.083 10:20:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:23.083 10:20:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 373656' 00:25:23.083 killing process with pid 373656 00:25:23.083 10:20:36 -- common/autotest_common.sh@945 -- # kill 373656 00:25:23.083 10:20:36 -- common/autotest_common.sh@950 -- # wait 373656 00:25:23.650 10:20:36 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:23.650 10:20:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:23.650 10:20:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:23.650 10:20:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.650 10:20:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:23.650 10:20:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.650 10:20:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.650 10:20:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.560 10:20:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:25.819 00:25:25.819 real 1m10.377s 00:25:25.819 user 4m11.091s 00:25:25.819 sys 0m23.634s 00:25:25.819 10:20:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.819 10:20:38 -- common/autotest_common.sh@10 -- # set +x 00:25:25.819 ************************************ 00:25:25.819 END TEST nvmf_multiconnection 00:25:25.819 ************************************ 00:25:25.819 10:20:38 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:25.819 10:20:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:25.819 10:20:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:25.819 10:20:38 -- common/autotest_common.sh@10 -- # set +x 00:25:25.819 ************************************ 00:25:25.819 START TEST nvmf_initiator_timeout 00:25:25.819 ************************************ 00:25:25.819 10:20:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:25.819 * Looking for test storage... 
00:25:25.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.819 10:20:38 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.819 10:20:38 -- nvmf/common.sh@7 -- # uname -s 00:25:25.819 10:20:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.819 10:20:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.819 10:20:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.819 10:20:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.819 10:20:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.819 10:20:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.819 10:20:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.819 10:20:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.819 10:20:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.819 10:20:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.819 10:20:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:25.819 10:20:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:25.819 10:20:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.819 10:20:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.819 10:20:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.819 10:20:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.819 10:20:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.819 10:20:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.819 10:20:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.819 10:20:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.819 10:20:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.819 10:20:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.819 10:20:38 -- paths/export.sh@5 -- # export PATH 00:25:25.819 10:20:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.819 10:20:38 -- nvmf/common.sh@46 -- # : 0 00:25:25.819 10:20:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:25.819 10:20:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:25.819 10:20:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:25.819 10:20:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.819 10:20:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.819 10:20:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:25.819 10:20:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:25.819 10:20:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:25.819 10:20:38 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.819 10:20:38 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.819 10:20:38 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:25.819 10:20:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:25.819 10:20:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.819 10:20:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:25.819 10:20:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:25.819 10:20:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:25.819 10:20:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.819 10:20:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.819 10:20:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.819 10:20:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:25.819 10:20:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:25.819 10:20:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:25.819 10:20:38 -- common/autotest_common.sh@10 -- # set +x 00:25:31.085 10:20:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:31.085 10:20:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:31.085 10:20:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:31.085 10:20:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:31.085 10:20:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:31.085 10:20:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:31.085 10:20:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:31.085 10:20:44 -- nvmf/common.sh@294 -- # net_devs=() 00:25:31.085 10:20:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:31.085 
10:20:44 -- nvmf/common.sh@295 -- # e810=() 00:25:31.085 10:20:44 -- nvmf/common.sh@295 -- # local -ga e810 00:25:31.085 10:20:44 -- nvmf/common.sh@296 -- # x722=() 00:25:31.085 10:20:44 -- nvmf/common.sh@296 -- # local -ga x722 00:25:31.085 10:20:44 -- nvmf/common.sh@297 -- # mlx=() 00:25:31.085 10:20:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:31.085 10:20:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.085 10:20:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:31.085 10:20:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:31.085 10:20:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:31.085 10:20:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.085 10:20:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:31.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:31.085 10:20:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.085 10:20:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:31.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:31.085 10:20:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:31.085 10:20:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:31.085 10:20:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.085 10:20:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.085 10:20:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.085 10:20:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.085 10:20:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:25:31.085 Found net devices under 0000:86:00.0: cvl_0_0 00:25:31.085 10:20:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.085 10:20:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.085 10:20:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.085 10:20:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.085 10:20:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.086 10:20:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:31.086 Found net devices under 0000:86:00.1: cvl_0_1 00:25:31.086 10:20:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.086 10:20:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:31.086 10:20:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:31.086 10:20:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:31.086 10:20:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:31.086 10:20:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:31.086 10:20:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.086 10:20:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.086 10:20:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.086 10:20:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:31.086 10:20:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.086 10:20:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.086 10:20:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:31.086 10:20:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.086 10:20:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.086 10:20:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:31.086 10:20:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:31.086 10:20:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.086 10:20:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.086 10:20:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.086 10:20:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.086 10:20:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:31.086 10:20:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.086 10:20:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.086 10:20:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.086 10:20:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:31.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:31.086 00:25:31.086 --- 10.0.0.2 ping statistics --- 00:25:31.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.086 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:31.086 10:20:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:25:31.344 00:25:31.344 --- 10.0.0.1 ping statistics --- 00:25:31.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.345 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:31.345 10:20:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.345 10:20:44 -- nvmf/common.sh@410 -- # return 0 00:25:31.345 10:20:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:31.345 10:20:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.345 10:20:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:31.345 10:20:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:31.345 10:20:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.345 10:20:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:31.345 10:20:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:31.345 10:20:44 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:31.345 10:20:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:31.345 10:20:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.345 10:20:44 -- common/autotest_common.sh@10 -- # set +x 00:25:31.345 10:20:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.345 10:20:44 -- nvmf/common.sh@469 -- # nvmfpid=387450 00:25:31.345 10:20:44 -- nvmf/common.sh@470 -- # waitforlisten 387450 00:25:31.345 10:20:44 -- common/autotest_common.sh@819 -- # '[' -z 387450 ']' 00:25:31.345 10:20:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.345 10:20:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:31.345 10:20:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.345 10:20:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:31.345 10:20:44 -- common/autotest_common.sh@10 -- # set +x 00:25:31.345 [2024-04-24 10:20:44.439671] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:25:31.345 [2024-04-24 10:20:44.439716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.345 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.345 [2024-04-24 10:20:44.497576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.345 [2024-04-24 10:20:44.580756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:31.345 [2024-04-24 10:20:44.580861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.345 [2024-04-24 10:20:44.580869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.345 [2024-04-24 10:20:44.580876] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
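The nvmf_tcp_init trace above splits the two E810 ports between the root network namespace (initiator side, cvl_0_1, 10.0.0.1) and a private namespace (target side, cvl_0_0, 10.0.0.2), opens TCP port 4420, and ping-checks both directions before any NVMe traffic flows. A condensed sketch of that setup, using the interface names, addresses, and port from this run:

# Condensed sketch of the namespace setup traced above (names from this run).
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps the other port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

This is also why every nvmf_tgt invocation later in the log is wrapped in "ip netns exec cvl_0_0_ns_spdk": the target must listen from inside the namespace that owns cvl_0_0.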
00:25:31.345 [2024-04-24 10:20:44.580918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.345 [2024-04-24 10:20:44.581018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.345 [2024-04-24 10:20:44.581104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.345 [2024-04-24 10:20:44.581106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.278 10:20:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:32.278 10:20:45 -- common/autotest_common.sh@852 -- # return 0 00:25:32.278 10:20:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:32.278 10:20:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 10:20:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:32.278 10:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 Malloc0 00:25:32.278 10:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:32.278 10:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 Delay0 00:25:32.278 10:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.278 10:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 [2024-04-24 10:20:45.317604] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.278 10:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:32.278 10:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 10:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:32.278 10:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 10:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.278 10:20:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.278 10:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:32.278 [2024-04-24 10:20:45.342565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.278 10:20:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.278 10:20:45 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:33.651 10:20:46 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:33.651 10:20:46 -- common/autotest_common.sh@1177 -- # local i=0 00:25:33.651 10:20:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.651 10:20:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:33.651 10:20:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:35.548 10:20:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:35.548 10:20:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:35.549 10:20:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:35.549 10:20:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:35.549 10:20:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.549 10:20:48 -- common/autotest_common.sh@1187 -- # return 0 00:25:35.549 10:20:48 -- target/initiator_timeout.sh@35 -- # fio_pid=388101 00:25:35.549 10:20:48 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:35.549 10:20:48 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:35.549 [global] 00:25:35.549 thread=1 00:25:35.549 invalidate=1 00:25:35.549 rw=write 00:25:35.549 time_based=1 00:25:35.549 runtime=60 00:25:35.549 ioengine=libaio 00:25:35.549 direct=1 00:25:35.549 bs=4096 00:25:35.549 iodepth=1 00:25:35.549 norandommap=0 00:25:35.549 numjobs=1 00:25:35.549 00:25:35.549 verify_dump=1 00:25:35.549 verify_backlog=512 00:25:35.549 verify_state_save=0 00:25:35.549 do_verify=1 00:25:35.549 verify=crc32c-intel 00:25:35.549 [job0] 00:25:35.549 filename=/dev/nvme0n1 00:25:35.549 Could not set queue depth (nvme0n1) 00:25:35.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:35.806 fio-3.35 00:25:35.806 Starting 1 thread 00:25:38.333 10:20:51 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:38.333 10:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.333 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:25:38.333 true 00:25:38.333 10:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.333 10:20:51 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:38.333 10:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.333 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:25:38.333 true 00:25:38.333 10:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.333 10:20:51 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:38.333 10:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.333 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:25:38.333 true 00:25:38.333 10:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.333 10:20:51 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:38.333 10:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.333 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:25:38.333 true 00:25:38.333 10:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
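The initiator_timeout test traced above layers a delay bdev (Delay0, 30 us baseline latency) over a malloc bdev, exports it over TCP, and connects with nvme connect; the bdev_delay_update_latency calls then push the latencies up to tens of seconds mid-fio to provoke initiator-side I/O timeouts, and the matching "... 30" calls after the sleep restore the baseline so the job can drain. A sketch of the same sequence driven through rpc.py (the rpc.py path is an assumption; the flags and values are the ones traced in this run, and bdev_delay latency arguments are in microseconds):

RPC=scripts/rpc.py                                 # assumed path inside an SPDK tree
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us baseline
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With fio running, stall I/O for longer than the initiator's timeout window
# (one of the four latency knobs updated in the trace):
$RPC bdev_delay_update_latency Delay0 avg_write 31000000   # 31 s, per this log
# ...later, drop back to the baseline so outstanding I/O completes:
$RPC bdev_delay_update_latency Delay0 avg_write 30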
00:25:38.333 10:20:51 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:41.611 10:20:54 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:41.611 10:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.611 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:41.611 true 00:25:41.611 10:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.611 10:20:54 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:41.611 10:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.611 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:41.611 true 00:25:41.611 10:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.611 10:20:54 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:41.611 10:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.611 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:41.611 true 00:25:41.611 10:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.611 10:20:54 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:41.611 10:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.611 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:41.611 true 00:25:41.611 10:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.611 10:20:54 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:41.611 10:20:54 -- target/initiator_timeout.sh@54 -- # wait 388101 00:26:37.820 00:26:37.820 job0: (groupid=0, jobs=1): err= 0: pid=388362: Wed Apr 24 10:21:49 2024 00:26:37.820 read: IOPS=124, BW=497KiB/s (509kB/s)(29.1MiB/60034msec) 00:26:37.820 slat (usec): min=6, max=10489, avg=11.49, stdev=156.16 00:26:37.820 clat (usec): min=248, max=41353k, avg=7748.51, stdev=478959.25 00:26:37.820 lat (usec): min=255, max=41353k, avg=7760.00, stdev=478959.47 00:26:37.820 clat percentiles (usec): 00:26:37.820 | 1.00th=[ 297], 5.00th=[ 363], 10.00th=[ 367], 00:26:37.820 | 20.00th=[ 375], 30.00th=[ 379], 40.00th=[ 383], 00:26:37.820 | 50.00th=[ 388], 60.00th=[ 392], 70.00th=[ 396], 00:26:37.820 | 80.00th=[ 400], 90.00th=[ 453], 95.00th=[ 523], 00:26:37.820 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:37.820 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:37.820 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60034msec); 0 zone resets 00:26:37.820 slat (usec): min=9, max=26983, avg=15.32, stdev=307.77 00:26:37.820 clat (usec): min=227, max=457, avg=261.41, stdev=14.85 00:26:37.820 lat (usec): min=237, max=27313, avg=276.73, stdev=308.92 00:26:37.820 clat percentiles (usec): 00:26:37.820 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:26:37.820 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 260], 60.00th=[ 265], 00:26:37.820 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:26:37.820 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 371], 99.95th=[ 400], 00:26:37.820 | 99.99th=[ 457] 00:26:37.820 bw ( KiB/s): min= 3424, max= 8000, per=100.00%, avg=5585.45, stdev=1546.04, samples=11 00:26:37.820 iops : min= 856, max= 2000, avg=1396.36, stdev=386.51, samples=11 00:26:37.820 lat (usec) : 250=9.82%, 500=86.27%, 750=1.70% 00:26:37.820 lat (msec) : 2=0.01%, 50=2.19%, >=2000=0.01% 00:26:37.820 cpu : usr=0.22%, sys=0.38%, ctx=15144, majf=0, minf=2 00:26:37.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.820 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.820 issued rwts: total=7456,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:37.820 00:26:37.820 Run status group 0 (all jobs): 00:26:37.820 READ: bw=497KiB/s (509kB/s), 497KiB/s-497KiB/s (509kB/s-509kB/s), io=29.1MiB (30.5MB), run=60034-60034msec 00:26:37.820 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60034-60034msec 00:26:37.820 00:26:37.820 Disk stats (read/write): 00:26:37.820 nvme0n1: ios=7504/7680, merge=0/0, ticks=17643/1956, in_queue=19599, util=99.89% 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:37.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:37.820 10:21:49 -- common/autotest_common.sh@1198 -- # local i=0 00:26:37.820 10:21:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:37.820 10:21:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:37.820 10:21:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:37.820 10:21:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:37.820 10:21:49 -- common/autotest_common.sh@1210 -- # return 0 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:37.820 nvmf hotplug test: fio successful as expected 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.820 10:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.820 10:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:37.820 10:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:37.820 10:21:49 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:37.820 10:21:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:37.820 10:21:49 -- nvmf/common.sh@116 -- # sync 00:26:37.820 10:21:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:37.820 10:21:49 -- nvmf/common.sh@119 -- # set +e 00:26:37.820 10:21:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:37.820 10:21:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:37.820 rmmod nvme_tcp 00:26:37.820 rmmod nvme_fabrics 00:26:37.820 rmmod nvme_keyring 00:26:37.820 10:21:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:37.820 10:21:49 -- nvmf/common.sh@123 -- # set -e 00:26:37.820 10:21:49 -- nvmf/common.sh@124 -- # return 0 00:26:37.820 10:21:49 -- nvmf/common.sh@477 -- # '[' -n 387450 ']' 00:26:37.820 10:21:49 -- nvmf/common.sh@478 -- # killprocess 387450 00:26:37.820 10:21:49 -- common/autotest_common.sh@926 -- # '[' -z 387450 ']' 00:26:37.820 10:21:49 -- common/autotest_common.sh@930 -- # kill -0 387450 00:26:37.820 10:21:49 -- common/autotest_common.sh@931 -- # uname 00:26:37.820 10:21:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:37.821 10:21:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 387450 00:26:37.821 
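The pass/fail decision traced just above comes down to polling lsblk for the subsystem serial: waitforserial loops until a device reporting SPDKISFASTANDAWESOME appears after nvme connect, and waitforserial_disconnect loops until it disappears after nvme disconnect. A minimal reconstruction of the disconnect-side loop, with the grep flags, retry bound, and sleep interval taken from the traced helpers (not the verbatim autotest_common.sh function):

# Reconstruction of the serial-polling helper traced above (a sketch, not verbatim).
waitforserial_disconnect() {
    local serial=$1 i=0
    # Keep polling while any block device still reports the serial.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1    # give up after ~30 s (15 tries x 2 s)
        sleep 2
    done
    return 0
}
waitforserial_disconnect SPDKISFASTANDAWESOME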
10:21:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:37.821 10:21:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:37.821 10:21:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 387450' 00:26:37.821 killing process with pid 387450 00:26:37.821 10:21:49 -- common/autotest_common.sh@945 -- # kill 387450 00:26:37.821 10:21:49 -- common/autotest_common.sh@950 -- # wait 387450 00:26:37.821 10:21:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:37.821 10:21:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:37.821 10:21:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:37.821 10:21:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.821 10:21:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:37.821 10:21:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.821 10:21:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.821 10:21:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.388 10:21:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:38.388 00:26:38.388 real 1m12.675s 00:26:38.388 user 4m24.590s 00:26:38.388 sys 0m6.121s 00:26:38.388 10:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.388 10:21:51 -- common/autotest_common.sh@10 -- # set +x 00:26:38.388 ************************************ 00:26:38.388 END TEST nvmf_initiator_timeout 00:26:38.388 ************************************ 00:26:38.388 10:21:51 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:26:38.388 10:21:51 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:26:38.388 10:21:51 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:26:38.388 10:21:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:38.388 10:21:51 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 10:21:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:43.658 10:21:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:43.658 10:21:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:43.658 10:21:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:43.658 10:21:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:43.658 10:21:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:43.658 10:21:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:43.658 10:21:56 -- nvmf/common.sh@294 -- # net_devs=() 00:26:43.658 10:21:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:43.658 10:21:56 -- nvmf/common.sh@295 -- # e810=() 00:26:43.658 10:21:56 -- nvmf/common.sh@295 -- # local -ga e810 00:26:43.658 10:21:56 -- nvmf/common.sh@296 -- # x722=() 00:26:43.658 10:21:56 -- nvmf/common.sh@296 -- # local -ga x722 00:26:43.658 10:21:56 -- nvmf/common.sh@297 -- # mlx=() 00:26:43.658 10:21:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:43.658 10:21:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.658 10:21:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:43.658 10:21:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:43.658 10:21:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:43.658 10:21:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:43.658 10:21:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:43.658 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:43.658 10:21:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:43.658 10:21:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:43.659 10:21:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:43.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:43.659 10:21:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:43.659 10:21:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:43.659 10:21:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.659 10:21:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:43.659 10:21:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.659 10:21:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:43.659 Found net devices under 0000:86:00.0: cvl_0_0 00:26:43.659 10:21:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.659 10:21:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:43.659 10:21:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.659 10:21:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:43.659 10:21:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.659 10:21:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:43.659 Found net devices under 0000:86:00.1: cvl_0_1 00:26:43.659 10:21:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.659 10:21:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:43.659 10:21:56 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.659 10:21:56 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:26:43.659 10:21:56 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:43.659 10:21:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:43.659 10:21:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:43.659 10:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.659 ************************************ 00:26:43.659 START TEST nvmf_perf_adq 00:26:43.659 ************************************ 00:26:43.659 10:21:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:43.659 * Looking for test storage... 00:26:43.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:43.659 10:21:56 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.659 10:21:56 -- nvmf/common.sh@7 -- # uname -s 00:26:43.659 10:21:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.659 10:21:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.659 10:21:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.659 10:21:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.659 10:21:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.659 10:21:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.659 10:21:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.659 10:21:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.659 10:21:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.659 10:21:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.659 10:21:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:43.659 10:21:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:43.659 10:21:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.659 10:21:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.659 10:21:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.659 10:21:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.659 10:21:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.659 10:21:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.659 10:21:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.659 10:21:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.659 10:21:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.659 10:21:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.659 10:21:56 -- paths/export.sh@5 -- # export PATH 00:26:43.659 10:21:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.659 10:21:56 -- nvmf/common.sh@46 -- # : 0 00:26:43.659 10:21:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:43.659 10:21:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:43.659 10:21:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:43.659 10:21:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.659 10:21:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.659 10:21:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:43.659 10:21:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:43.659 10:21:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:43.659 10:21:56 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:43.659 10:21:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:43.659 10:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:47.842 10:22:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:47.842 10:22:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:47.842 10:22:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:47.842 10:22:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:47.842 10:22:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:47.842 10:22:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:47.842 10:22:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:47.842 10:22:01 -- nvmf/common.sh@294 -- # net_devs=() 00:26:47.842 10:22:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:47.842 10:22:01 -- nvmf/common.sh@295 -- # e810=() 00:26:47.842 10:22:01 -- nvmf/common.sh@295 -- # local -ga e810 00:26:47.842 10:22:01 -- nvmf/common.sh@296 -- # x722=() 00:26:47.842 10:22:01 -- nvmf/common.sh@296 -- # local -ga x722 00:26:47.842 10:22:01 -- nvmf/common.sh@297 -- # mlx=() 00:26:47.842 10:22:01 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:26:47.842 10:22:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.842 10:22:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:47.842 10:22:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:47.842 10:22:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:47.842 10:22:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:47.842 10:22:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:47.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:47.842 10:22:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:47.842 10:22:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:47.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:47.842 10:22:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:47.842 10:22:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:47.842 10:22:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:47.842 10:22:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.842 10:22:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:47.842 10:22:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.842 10:22:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:47.842 Found net devices under 0000:86:00.0: cvl_0_0 00:26:47.842 10:22:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.842 10:22:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:47.842 10:22:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
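gather_supported_nvmf_pci_devs, whose trace this is (perf_adq.sh re-runs it after reloading the ice driver, and the walk continues just below), selects NICs by their vendor:device IDs and then resolves each PCI address to a kernel interface through sysfs. A compressed sketch of that walk, assuming pci_bus_cache is an associative array mapping "vendor:device" keys to PCI addresses (the real script builds it elsewhere from the PCI bus):

# Compressed sketch of the discovery logic traced here; pci_bus_cache is assumed
# to be pre-populated with "vendor:device" -> PCI address entries.
declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1" )
e810=( ${pci_bus_cache["0x8086:0x159b"]} )        # E810-family devices on this host
net_devs=()
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs links PCI addr -> netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done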
00:26:47.843 10:22:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:47.843 10:22:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.843 10:22:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:47.843 Found net devices under 0000:86:00.1: cvl_0_1 00:26:47.843 10:22:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.843 10:22:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:47.843 10:22:01 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.843 10:22:01 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:47.843 10:22:01 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:47.843 10:22:01 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:47.843 10:22:01 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:49.215 10:22:02 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:51.120 10:22:04 -- target/perf_adq.sh@54 -- # sleep 5 00:26:56.467 10:22:09 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:56.467 10:22:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:56.467 10:22:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.467 10:22:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:56.467 10:22:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:56.467 10:22:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:56.467 10:22:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.467 10:22:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.467 10:22:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.467 10:22:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:56.467 10:22:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:56.467 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.467 10:22:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:56.467 10:22:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:56.467 10:22:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:56.467 10:22:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:56.467 10:22:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:56.467 10:22:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:56.467 10:22:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:56.467 10:22:09 -- nvmf/common.sh@294 -- # net_devs=() 00:26:56.467 10:22:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:56.467 10:22:09 -- nvmf/common.sh@295 -- # e810=() 00:26:56.467 10:22:09 -- nvmf/common.sh@295 -- # local -ga e810 00:26:56.467 10:22:09 -- nvmf/common.sh@296 -- # x722=() 00:26:56.467 10:22:09 -- nvmf/common.sh@296 -- # local -ga x722 00:26:56.467 10:22:09 -- nvmf/common.sh@297 -- # mlx=() 00:26:56.467 10:22:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:56.467 10:22:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.467 10:22:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:56.467 10:22:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:56.467 10:22:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:56.467 10:22:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:56.467 10:22:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:56.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:56.467 10:22:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:56.467 10:22:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:56.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:56.467 10:22:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:56.467 10:22:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:56.467 10:22:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.467 10:22:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:56.467 10:22:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.467 10:22:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:56.467 Found net devices under 0000:86:00.0: cvl_0_0 00:26:56.467 10:22:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.467 10:22:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:56.467 10:22:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.467 10:22:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:56.467 10:22:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.467 10:22:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:56.467 Found net devices under 0000:86:00.1: cvl_0_1 00:26:56.467 10:22:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.467 10:22:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:56.467 10:22:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:56.467 10:22:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:56.467 10:22:09 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:56.467 10:22:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:56.467 10:22:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.467 10:22:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.467 10:22:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.467 10:22:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:56.467 10:22:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.467 10:22:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.468 10:22:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:56.468 10:22:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.468 10:22:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.468 10:22:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:56.468 10:22:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:56.468 10:22:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.468 10:22:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.468 10:22:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.468 10:22:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.468 10:22:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:56.468 10:22:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.468 10:22:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.468 10:22:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.468 10:22:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:56.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:26:56.468 00:26:56.468 --- 10.0.0.2 ping statistics --- 00:26:56.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.468 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:26:56.468 10:22:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:26:56.468 00:26:56.468 --- 10.0.0.1 ping statistics --- 00:26:56.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.468 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:26:56.468 10:22:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.468 10:22:09 -- nvmf/common.sh@410 -- # return 0 00:26:56.468 10:22:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:56.468 10:22:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.468 10:22:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:56.468 10:22:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:56.468 10:22:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.468 10:22:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:56.468 10:22:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:56.468 10:22:09 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:56.468 10:22:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:56.468 10:22:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:56.468 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.468 10:22:09 -- nvmf/common.sh@469 -- # nvmfpid=406454 00:26:56.468 10:22:09 -- nvmf/common.sh@470 -- # waitforlisten 406454 00:26:56.468 10:22:09 -- common/autotest_common.sh@819 -- # '[' -z 406454 ']' 00:26:56.468 10:22:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.468 10:22:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:56.468 10:22:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.468 10:22:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:56.468 10:22:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:56.468 10:22:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.468 [2024-04-24 10:22:09.520629] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:26:56.468 [2024-04-24 10:22:09.520677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.468 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.468 [2024-04-24 10:22:09.579430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.468 [2024-04-24 10:22:09.658437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:56.468 [2024-04-24 10:22:09.658546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.468 [2024-04-24 10:22:09.658554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.468 [2024-04-24 10:22:09.658560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
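The perf_adq target that comes up next is started with --wait-for-rpc so that socket-layer options can be applied before SPDK finishes initializing; the trace below shows the required order: sock_impl_set_options first, then framework_start_init, then a TCP transport created with an explicit socket priority. A sketch of that startup handshake, with the rpc.py and nvmf_tgt paths assumed relative to an SPDK checkout:

# Sketch of the --wait-for-rpc startup ordering traced below (paths assumed).
RPC=scripts/rpc.py
build/bin/nvmf_tgt -m 0xF --wait-for-rpc &           # target idles until initialized
$RPC sock_impl_set_options -i posix \
    --enable-placement-id 0 --enable-zerocopy-send-server   # must precede init
$RPC framework_start_init                            # complete SPDK startup
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0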
00:26:56.468 [2024-04-24 10:22:09.658602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.468 [2024-04-24 10:22:09.658695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.468 [2024-04-24 10:22:09.658711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.468 [2024-04-24 10:22:09.658712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.402 10:22:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:57.402 10:22:10 -- common/autotest_common.sh@852 -- # return 0 00:26:57.402 10:22:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:57.402 10:22:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 10:22:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.402 10:22:10 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:57.402 10:22:10 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 [2024-04-24 10:22:10.482818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.402 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 Malloc1 00:26:57.402 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 10:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.402 10:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.402 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:26:57.402 [2024-04-24 10:22:10.530483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.402 10:22:10 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.402 10:22:10 -- target/perf_adq.sh@73 -- # perfpid=406690 00:26:57.402 10:22:10 -- target/perf_adq.sh@74 -- # sleep 2 00:26:57.402 10:22:10 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:57.402 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.302 10:22:12 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:59.302 10:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.302 10:22:12 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:59.302 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:26:59.302 10:22:12 -- target/perf_adq.sh@76 -- # wc -l 00:26:59.302 10:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.557 10:22:12 -- target/perf_adq.sh@76 -- # count=4 00:26:59.557 10:22:12 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:59.557 10:22:12 -- target/perf_adq.sh@81 -- # wait 406690 00:27:07.660 Initializing NVMe Controllers 00:27:07.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:07.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:07.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:07.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:07.660 Initialization complete. Launching workers. 00:27:07.660 ======================================================== 00:27:07.660 Latency(us) 00:27:07.660 Device Information : IOPS MiB/s Average min max 00:27:07.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10862.70 42.43 5891.54 979.19 9564.50 00:27:07.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11733.50 45.83 5454.68 1032.45 10475.09 00:27:07.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10713.00 41.85 5974.12 1113.47 10997.98 00:27:07.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10652.00 41.61 6027.49 1099.48 46192.59 00:27:07.660 ======================================================== 00:27:07.660 Total : 43961.20 171.72 5828.00 979.19 46192.59 00:27:07.660 00:27:07.660 10:22:20 -- target/perf_adq.sh@82 -- # nvmftestfini 00:27:07.660 10:22:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:07.660 10:22:20 -- nvmf/common.sh@116 -- # sync 00:27:07.660 10:22:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:07.660 10:22:20 -- nvmf/common.sh@119 -- # set +e 00:27:07.660 10:22:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:07.660 10:22:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:07.660 rmmod nvme_tcp 00:27:07.660 rmmod nvme_fabrics 00:27:07.660 rmmod nvme_keyring 00:27:07.660 10:22:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:07.660 10:22:20 -- nvmf/common.sh@123 -- # set -e 00:27:07.660 10:22:20 -- nvmf/common.sh@124 -- # return 0 00:27:07.660 10:22:20 -- nvmf/common.sh@477 -- # '[' -n 406454 ']' 00:27:07.660 10:22:20 -- nvmf/common.sh@478 -- # killprocess 406454 00:27:07.660 10:22:20 -- common/autotest_common.sh@926 -- # '[' -z 406454 ']' 00:27:07.660 10:22:20 -- common/autotest_common.sh@930 -- # 
kill -0 406454 00:27:07.660 10:22:20 -- common/autotest_common.sh@931 -- # uname 00:27:07.660 10:22:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:07.660 10:22:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 406454 00:27:07.660 10:22:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:07.660 10:22:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:07.660 10:22:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 406454' 00:27:07.660 killing process with pid 406454 00:27:07.660 10:22:20 -- common/autotest_common.sh@945 -- # kill 406454 00:27:07.660 10:22:20 -- common/autotest_common.sh@950 -- # wait 406454 00:27:07.919 10:22:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:07.919 10:22:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:07.919 10:22:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:07.919 10:22:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.919 10:22:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:07.919 10:22:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.919 10:22:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.919 10:22:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.465 10:22:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:10.465 10:22:23 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:27:10.465 10:22:23 -- target/perf_adq.sh@52 -- # rmmod ice 00:27:11.031 10:22:24 -- target/perf_adq.sh@53 -- # modprobe ice 00:27:13.562 10:22:26 -- target/perf_adq.sh@54 -- # sleep 5 00:27:18.834 10:22:31 -- target/perf_adq.sh@87 -- # nvmftestinit 00:27:18.834 10:22:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:18.834 10:22:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.834 10:22:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:18.834 10:22:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:18.834 10:22:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:18.834 10:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.834 10:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.834 10:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.834 10:22:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:18.834 10:22:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:18.834 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.834 10:22:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:18.834 10:22:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:18.834 10:22:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:18.834 10:22:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:18.834 10:22:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:18.834 10:22:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:18.834 10:22:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:18.834 10:22:31 -- nvmf/common.sh@294 -- # net_devs=() 00:27:18.834 10:22:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:18.834 10:22:31 -- nvmf/common.sh@295 -- # e810=() 00:27:18.834 10:22:31 -- nvmf/common.sh@295 -- # local -ga e810 00:27:18.834 10:22:31 -- nvmf/common.sh@296 -- # x722=() 00:27:18.834 10:22:31 -- nvmf/common.sh@296 -- # local -ga x722 00:27:18.834 10:22:31 -- nvmf/common.sh@297 -- # mlx=() 00:27:18.834 10:22:31 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:27:18.834 10:22:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.834 10:22:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:18.834 10:22:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:18.834 10:22:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:18.834 10:22:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.834 10:22:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:18.834 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:18.834 10:22:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.834 10:22:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:18.834 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:18.834 10:22:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:18.834 10:22:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.834 10:22:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.834 10:22:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.834 10:22:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.834 10:22:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:18.834 Found net devices under 0000:86:00.0: cvl_0_0 00:27:18.834 10:22:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.834 10:22:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.834 10:22:31 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.834 10:22:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.834 10:22:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.834 10:22:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:18.834 Found net devices under 0000:86:00.1: cvl_0_1 00:27:18.834 10:22:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.834 10:22:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:18.834 10:22:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:18.834 10:22:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:18.834 10:22:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.834 10:22:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.834 10:22:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.834 10:22:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:18.834 10:22:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.834 10:22:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.834 10:22:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:18.834 10:22:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.834 10:22:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.834 10:22:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:18.834 10:22:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:18.834 10:22:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.834 10:22:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.834 10:22:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.834 10:22:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.834 10:22:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:18.834 10:22:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.834 10:22:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.834 10:22:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.834 10:22:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:18.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:18.834 00:27:18.834 --- 10.0.0.2 ping statistics --- 00:27:18.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.834 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:18.834 10:22:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:27:18.834 00:27:18.834 --- 10.0.0.1 ping statistics --- 00:27:18.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.834 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:18.834 10:22:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.834 10:22:31 -- nvmf/common.sh@410 -- # return 0 00:27:18.834 10:22:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:18.834 10:22:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.834 10:22:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:18.834 10:22:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.834 10:22:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:18.834 10:22:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:18.834 10:22:31 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:27:18.834 10:22:31 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:18.834 10:22:31 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:18.834 10:22:31 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:18.834 net.core.busy_poll = 1 00:27:18.834 10:22:31 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:18.834 net.core.busy_read = 1 00:27:18.834 10:22:31 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:18.834 10:22:31 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:18.834 10:22:31 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:18.834 10:22:31 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:18.834 10:22:31 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:18.834 10:22:31 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:18.834 10:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:18.834 10:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:18.834 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.834 10:22:31 -- nvmf/common.sh@469 -- # nvmfpid=410404 00:27:18.834 10:22:31 -- nvmf/common.sh@470 -- # waitforlisten 410404 00:27:18.834 10:22:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:18.834 10:22:31 -- common/autotest_common.sh@819 -- # '[' -z 410404 ']' 00:27:18.834 10:22:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.835 10:22:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.835 10:22:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
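The ADQ host configuration traced just above reduces to the following shell sequence. This is a sketch assembled from this run's own commands; the interface name (cvl_0_0), namespace (cvl_0_0_ns_spdk), queue split (2@0 2@2), and address 10.0.0.2:4420 are specific to this machine and would differ elsewhere:

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on                    # let the NIC offload TC rules
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                    # enable socket busy polling
    sysctl -w net.core.busy_read=1
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel                              # two traffic classes, two queues each
    $NS tc qdisc add dev cvl_0_0 ingress
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 # steer NVMe/TCP traffic to TC 1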
00:27:18.835 10:22:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.835 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.835 [2024-04-24 10:22:31.801667] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:27:18.835 [2024-04-24 10:22:31.801712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.835 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.835 [2024-04-24 10:22:31.858699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.835 [2024-04-24 10:22:31.937444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.835 [2024-04-24 10:22:31.937551] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.835 [2024-04-24 10:22:31.937559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.835 [2024-04-24 10:22:31.937566] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.835 [2024-04-24 10:22:31.937610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.835 [2024-04-24 10:22:31.937706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.835 [2024-04-24 10:22:31.937793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.835 [2024-04-24 10:22:31.937794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.400 10:22:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:19.400 10:22:32 -- common/autotest_common.sh@852 -- # return 0 00:27:19.400 10:22:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:19.400 10:22:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:19.400 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 10:22:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.400 10:22:32 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:27:19.400 10:22:32 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:19.400 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.400 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.400 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.400 10:22:32 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:27:19.400 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.400 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.656 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.656 10:22:32 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:19.656 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.656 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.656 [2024-04-24 10:22:32.736617] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.656 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.656 10:22:32 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:19.656 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.656 10:22:32 -- 
common/autotest_common.sh@10 -- # set +x 00:27:19.656 Malloc1 00:27:19.656 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.656 10:22:32 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:19.656 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.656 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.656 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.656 10:22:32 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:19.656 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.656 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.656 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.656 10:22:32 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:19.656 10:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.656 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:27:19.656 [2024-04-24 10:22:32.780123] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.656 10:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.656 10:22:32 -- target/perf_adq.sh@94 -- # perfpid=410580 00:27:19.656 10:22:32 -- target/perf_adq.sh@95 -- # sleep 2 00:27:19.656 10:22:32 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:19.656 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.552 10:22:34 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:27:21.552 10:22:34 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:21.552 10:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.552 10:22:34 -- target/perf_adq.sh@97 -- # wc -l 00:27:21.552 10:22:34 -- common/autotest_common.sh@10 -- # set +x 00:27:21.552 10:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.809 10:22:34 -- target/perf_adq.sh@97 -- # count=2 00:27:21.809 10:22:34 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:27:21.809 10:22:34 -- target/perf_adq.sh@103 -- # wait 410580 00:27:29.910 Initializing NVMe Controllers 00:27:29.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:29.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:29.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:29.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:29.910 Initialization complete. Launching workers. 
00:27:29.910 ======================================================== 00:27:29.910 Latency(us) 00:27:29.910 Device Information : IOPS MiB/s Average min max 00:27:29.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14909.64 58.24 4292.67 1319.05 7025.36 00:27:29.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6519.08 25.47 9820.41 1724.70 57635.39 00:27:29.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5800.88 22.66 11070.98 1321.89 56828.00 00:27:29.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5106.28 19.95 12550.07 1627.43 59343.83 00:27:29.910 ======================================================== 00:27:29.910 Total : 32335.88 126.31 7927.04 1319.05 59343.83 00:27:29.910 00:27:29.910 10:22:42 -- target/perf_adq.sh@104 -- # nvmftestfini 00:27:29.910 10:22:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:29.910 10:22:42 -- nvmf/common.sh@116 -- # sync 00:27:29.910 10:22:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:29.910 10:22:42 -- nvmf/common.sh@119 -- # set +e 00:27:29.910 10:22:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:29.910 10:22:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:29.910 rmmod nvme_tcp 00:27:29.910 rmmod nvme_fabrics 00:27:29.910 rmmod nvme_keyring 00:27:29.910 10:22:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:29.910 10:22:43 -- nvmf/common.sh@123 -- # set -e 00:27:29.910 10:22:43 -- nvmf/common.sh@124 -- # return 0 00:27:29.910 10:22:43 -- nvmf/common.sh@477 -- # '[' -n 410404 ']' 00:27:29.910 10:22:43 -- nvmf/common.sh@478 -- # killprocess 410404 00:27:29.910 10:22:43 -- common/autotest_common.sh@926 -- # '[' -z 410404 ']' 00:27:29.910 10:22:43 -- common/autotest_common.sh@930 -- # kill -0 410404 00:27:29.910 10:22:43 -- common/autotest_common.sh@931 -- # uname 00:27:29.910 10:22:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:29.910 10:22:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 410404 00:27:29.910 10:22:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:29.911 10:22:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:29.911 10:22:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 410404' 00:27:29.911 killing process with pid 410404 00:27:29.911 10:22:43 -- common/autotest_common.sh@945 -- # kill 410404 00:27:29.911 10:22:43 -- common/autotest_common.sh@950 -- # wait 410404 00:27:30.168 10:22:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:30.168 10:22:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:30.168 10:22:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:30.168 10:22:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.168 10:22:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:30.168 10:22:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.168 10:22:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.168 10:22:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.072 10:22:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:32.072 10:22:45 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:27:32.072 00:27:32.072 real 0m49.162s 00:27:32.072 user 2m48.725s 00:27:32.072 sys 0m9.297s 00:27:32.072 10:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.072 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.072 
************************************ 00:27:32.072 END TEST nvmf_perf_adq 00:27:32.072 ************************************ 00:27:32.332 10:22:45 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:32.332 10:22:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:32.332 10:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.332 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.332 ************************************ 00:27:32.332 START TEST nvmf_shutdown 00:27:32.332 ************************************ 00:27:32.332 10:22:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:32.332 * Looking for test storage... 00:27:32.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:32.332 10:22:45 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.332 10:22:45 -- nvmf/common.sh@7 -- # uname -s 00:27:32.332 10:22:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.332 10:22:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.332 10:22:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.332 10:22:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.332 10:22:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.332 10:22:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.332 10:22:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.332 10:22:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.332 10:22:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.332 10:22:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.332 10:22:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:32.332 10:22:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:32.332 10:22:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.332 10:22:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.332 10:22:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.332 10:22:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.332 10:22:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.332 10:22:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.332 10:22:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.332 10:22:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.332 10:22:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.332 10:22:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.332 10:22:45 -- paths/export.sh@5 -- # export PATH 00:27:32.332 10:22:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.332 10:22:45 -- nvmf/common.sh@46 -- # : 0 00:27:32.332 10:22:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:32.332 10:22:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:32.332 10:22:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:32.332 10:22:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.332 10:22:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.332 10:22:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:32.332 10:22:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:32.332 10:22:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:32.332 10:22:45 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.332 10:22:45 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:32.332 10:22:45 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:32.332 10:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:32.332 10:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.332 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.332 ************************************ 00:27:32.332 START TEST nvmf_shutdown_tc1 00:27:32.332 ************************************ 00:27:32.332 10:22:45 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:27:32.332 10:22:45 -- target/shutdown.sh@74 -- # starttarget 00:27:32.332 10:22:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:32.332 10:22:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:32.332 10:22:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.332 10:22:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:32.332 10:22:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:32.332 10:22:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:32.332 
10:22:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.332 10:22:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.332 10:22:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.332 10:22:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:32.332 10:22:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:32.332 10:22:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:32.332 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:27:37.602 10:22:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:37.602 10:22:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:37.602 10:22:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:37.602 10:22:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:37.602 10:22:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:37.602 10:22:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:37.602 10:22:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:37.602 10:22:50 -- nvmf/common.sh@294 -- # net_devs=() 00:27:37.602 10:22:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:37.602 10:22:50 -- nvmf/common.sh@295 -- # e810=() 00:27:37.602 10:22:50 -- nvmf/common.sh@295 -- # local -ga e810 00:27:37.602 10:22:50 -- nvmf/common.sh@296 -- # x722=() 00:27:37.602 10:22:50 -- nvmf/common.sh@296 -- # local -ga x722 00:27:37.602 10:22:50 -- nvmf/common.sh@297 -- # mlx=() 00:27:37.603 10:22:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:37.603 10:22:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.603 10:22:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:37.603 10:22:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:37.603 10:22:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:37.603 10:22:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:37.603 10:22:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:37.603 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:37.603 10:22:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:27:37.603 10:22:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:37.603 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:37.603 10:22:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:37.603 10:22:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:37.603 10:22:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.603 10:22:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:37.603 10:22:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.603 10:22:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:37.603 Found net devices under 0000:86:00.0: cvl_0_0 00:27:37.603 10:22:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.603 10:22:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:37.603 10:22:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.603 10:22:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:37.603 10:22:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.603 10:22:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:37.603 Found net devices under 0000:86:00.1: cvl_0_1 00:27:37.603 10:22:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.603 10:22:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:37.603 10:22:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:37.603 10:22:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:37.603 10:22:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.603 10:22:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.603 10:22:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.603 10:22:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:37.603 10:22:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.603 10:22:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.603 10:22:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:37.603 10:22:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.603 10:22:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.603 10:22:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:37.603 10:22:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:37.603 10:22:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.603 10:22:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.603 10:22:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.603 10:22:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.603 10:22:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:37.603 10:22:50 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.603 10:22:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.603 10:22:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.603 10:22:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:37.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:27:37.603 00:27:37.603 --- 10.0.0.2 ping statistics --- 00:27:37.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.603 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:37.603 10:22:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:27:37.603 00:27:37.603 --- 10.0.0.1 ping statistics --- 00:27:37.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.603 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:27:37.603 10:22:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.603 10:22:50 -- nvmf/common.sh@410 -- # return 0 00:27:37.603 10:22:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:37.603 10:22:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.603 10:22:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:37.603 10:22:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.603 10:22:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:37.603 10:22:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:37.603 10:22:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:37.603 10:22:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:37.603 10:22:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:37.603 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:27:37.603 10:22:50 -- nvmf/common.sh@469 -- # nvmfpid=415833 00:27:37.603 10:22:50 -- nvmf/common.sh@470 -- # waitforlisten 415833 00:27:37.603 10:22:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:37.603 10:22:50 -- common/autotest_common.sh@819 -- # '[' -z 415833 ']' 00:27:37.603 10:22:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.603 10:22:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:37.603 10:22:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.603 10:22:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:37.603 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:27:37.603 [2024-04-24 10:22:50.677445] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
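For orientation, the network plumbing nvmf_tcp_init performs above (here and in the earlier perf_adq runs) boils down to splitting the two ports of the ice NIC, presumably cabled back-to-back: the target port moves into its own network namespace while the initiator port stays in the root namespace. A condensed sketch using this run's names:

    ip netns add cvl_0_0_ns_spdk                      # dedicated namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                # sanity-check reachability both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1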
00:27:37.603 [2024-04-24 10:22:50.677485] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.603 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.603 [2024-04-24 10:22:50.733688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.603 [2024-04-24 10:22:50.810840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:37.603 [2024-04-24 10:22:50.810951] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.603 [2024-04-24 10:22:50.810958] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.603 [2024-04-24 10:22:50.810964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.603 [2024-04-24 10:22:50.811062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.603 [2024-04-24 10:22:50.811144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.603 [2024-04-24 10:22:50.811251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.603 [2024-04-24 10:22:50.811252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:38.537 10:22:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:38.537 10:22:51 -- common/autotest_common.sh@852 -- # return 0 00:27:38.537 10:22:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:38.537 10:22:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:38.537 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:27:38.537 10:22:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.537 10:22:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.537 10:22:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.537 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:27:38.537 [2024-04-24 10:22:51.525411] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.537 10:22:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.537 10:22:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:38.537 10:22:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:38.537 10:22:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:38.537 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:27:38.537 10:22:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- 
target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.537 10:22:51 -- target/shutdown.sh@28 -- # cat 00:27:38.537 10:22:51 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:38.537 10:22:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.537 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:27:38.537 Malloc1 00:27:38.537 [2024-04-24 10:22:51.621401] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.537 Malloc2 00:27:38.537 Malloc3 00:27:38.537 Malloc4 00:27:38.537 Malloc5 00:27:38.537 Malloc6 00:27:38.796 Malloc7 00:27:38.796 Malloc8 00:27:38.796 Malloc9 00:27:38.796 Malloc10 00:27:38.796 10:22:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.796 10:22:52 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:38.796 10:22:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:38.796 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.796 10:22:52 -- target/shutdown.sh@78 -- # perfpid=416116 00:27:38.796 10:22:52 -- target/shutdown.sh@79 -- # waitforlisten 416116 /var/tmp/bdevperf.sock 00:27:38.796 10:22:52 -- common/autotest_common.sh@819 -- # '[' -z 416116 ']' 00:27:38.796 10:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.796 10:22:52 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:38.796 10:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:38.796 10:22:52 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:38.796 10:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:38.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
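The Malloc1 through Malloc10 lines above come from shutdown.sh's create-subsystems loop. In outline it is a sketch like the following; rpc_cmd is the test suite's wrapper around scripts/rpc.py, the 64 MiB / 512 B malloc geometry matches this run, and the serial-number pattern is illustrative:

    for i in $(seq 1 10); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"            # 64 MiB bdev, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                                   # serial padding is illustrative
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done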
00:27:38.796 10:22:52 -- nvmf/common.sh@520 -- # config=() 00:27:38.796 10:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:38.796 10:22:52 -- nvmf/common.sh@520 -- # local subsystem config 00:27:38.796 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:27:38.796 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.796 { 00:27:38.796 "params": { 00:27:38.796 "name": "Nvme$subsystem", 00:27:38.796 "trtype": "$TEST_TRANSPORT", 00:27:38.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.796 "adrfam": "ipv4", 00:27:38.796 "trsvcid": "$NVMF_PORT", 00:27:38.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.796 "hdgst": ${hdgst:-false}, 00:27:38.796 "ddgst": ${ddgst:-false} 00:27:38.796 }, 00:27:38.796 "method": "bdev_nvme_attach_controller" 00:27:38.796 } 00:27:38.796 EOF 00:27:38.796 )") 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:38.796 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.796 { 00:27:38.796 "params": { 00:27:38.796 "name": "Nvme$subsystem", 00:27:38.796 "trtype": "$TEST_TRANSPORT", 00:27:38.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.796 "adrfam": "ipv4", 00:27:38.796 "trsvcid": "$NVMF_PORT", 00:27:38.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.796 "hdgst": ${hdgst:-false}, 00:27:38.796 "ddgst": ${ddgst:-false} 00:27:38.796 }, 00:27:38.796 "method": "bdev_nvme_attach_controller" 00:27:38.796 } 00:27:38.796 EOF 00:27:38.796 )") 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:38.796 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.796 { 00:27:38.796 "params": { 00:27:38.796 "name": "Nvme$subsystem", 00:27:38.796 "trtype": "$TEST_TRANSPORT", 00:27:38.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.796 "adrfam": "ipv4", 00:27:38.796 "trsvcid": "$NVMF_PORT", 00:27:38.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.796 "hdgst": ${hdgst:-false}, 00:27:38.796 "ddgst": ${ddgst:-false} 00:27:38.796 }, 00:27:38.796 "method": "bdev_nvme_attach_controller" 00:27:38.796 } 00:27:38.796 EOF 00:27:38.796 )") 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:38.796 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.796 { 00:27:38.796 "params": { 00:27:38.796 "name": "Nvme$subsystem", 00:27:38.796 "trtype": "$TEST_TRANSPORT", 00:27:38.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.796 "adrfam": "ipv4", 00:27:38.796 "trsvcid": "$NVMF_PORT", 00:27:38.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.796 "hdgst": ${hdgst:-false}, 00:27:38.796 "ddgst": ${ddgst:-false} 00:27:38.796 }, 00:27:38.796 "method": "bdev_nvme_attach_controller" 00:27:38.796 } 00:27:38.796 EOF 00:27:38.796 )") 00:27:38.796 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:39.055 { 00:27:39.055 "params": { 00:27:39.055 "name": "Nvme$subsystem", 00:27:39.055 "trtype": 
"$TEST_TRANSPORT", 00:27:39.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.055 "adrfam": "ipv4", 00:27:39.055 "trsvcid": "$NVMF_PORT", 00:27:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.055 "hdgst": ${hdgst:-false}, 00:27:39.055 "ddgst": ${ddgst:-false} 00:27:39.055 }, 00:27:39.055 "method": "bdev_nvme_attach_controller" 00:27:39.055 } 00:27:39.055 EOF 00:27:39.055 )") 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:39.055 { 00:27:39.055 "params": { 00:27:39.055 "name": "Nvme$subsystem", 00:27:39.055 "trtype": "$TEST_TRANSPORT", 00:27:39.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.055 "adrfam": "ipv4", 00:27:39.055 "trsvcid": "$NVMF_PORT", 00:27:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.055 "hdgst": ${hdgst:-false}, 00:27:39.055 "ddgst": ${ddgst:-false} 00:27:39.055 }, 00:27:39.055 "method": "bdev_nvme_attach_controller" 00:27:39.055 } 00:27:39.055 EOF 00:27:39.055 )") 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:39.055 [2024-04-24 10:22:52.088340] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:27:39.055 [2024-04-24 10:22:52.088389] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:39.055 { 00:27:39.055 "params": { 00:27:39.055 "name": "Nvme$subsystem", 00:27:39.055 "trtype": "$TEST_TRANSPORT", 00:27:39.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.055 "adrfam": "ipv4", 00:27:39.055 "trsvcid": "$NVMF_PORT", 00:27:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.055 "hdgst": ${hdgst:-false}, 00:27:39.055 "ddgst": ${ddgst:-false} 00:27:39.055 }, 00:27:39.055 "method": "bdev_nvme_attach_controller" 00:27:39.055 } 00:27:39.055 EOF 00:27:39.055 )") 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:39.055 { 00:27:39.055 "params": { 00:27:39.055 "name": "Nvme$subsystem", 00:27:39.055 "trtype": "$TEST_TRANSPORT", 00:27:39.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.055 "adrfam": "ipv4", 00:27:39.055 "trsvcid": "$NVMF_PORT", 00:27:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.055 "hdgst": ${hdgst:-false}, 00:27:39.055 "ddgst": ${ddgst:-false} 00:27:39.055 }, 00:27:39.055 "method": "bdev_nvme_attach_controller" 00:27:39.055 } 00:27:39.055 EOF 00:27:39.055 )") 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:39.055 { 00:27:39.055 "params": { 00:27:39.055 "name": "Nvme$subsystem", 00:27:39.055 "trtype": "$TEST_TRANSPORT", 00:27:39.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.055 "adrfam": "ipv4", 00:27:39.055 "trsvcid": "$NVMF_PORT", 
00:27:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.055 "hdgst": ${hdgst:-false}, 00:27:39.055 "ddgst": ${ddgst:-false} 00:27:39.055 }, 00:27:39.055 "method": "bdev_nvme_attach_controller" 00:27:39.055 } 00:27:39.055 EOF 00:27:39.055 )") 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 10:22:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:39.055 { 00:27:39.055 "params": { 00:27:39.055 "name": "Nvme$subsystem", 00:27:39.055 "trtype": "$TEST_TRANSPORT", 00:27:39.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.055 "adrfam": "ipv4", 00:27:39.055 "trsvcid": "$NVMF_PORT", 00:27:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.055 "hdgst": ${hdgst:-false}, 00:27:39.055 "ddgst": ${ddgst:-false} 00:27:39.055 }, 00:27:39.055 "method": "bdev_nvme_attach_controller" 00:27:39.055 } 00:27:39.055 EOF 00:27:39.055 )") 00:27:39.055 10:22:52 -- nvmf/common.sh@542 -- # cat 00:27:39.055 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.056 10:22:52 -- nvmf/common.sh@544 -- # jq . 00:27:39.056 10:22:52 -- nvmf/common.sh@545 -- # IFS=, 00:27:39.056 10:22:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme1", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme2", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme3", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme4", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme5", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 
00:27:39.056 "name": "Nvme6", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme7", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme8", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme9", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 },{ 00:27:39.056 "params": { 00:27:39.056 "name": "Nvme10", 00:27:39.056 "trtype": "tcp", 00:27:39.056 "traddr": "10.0.0.2", 00:27:39.056 "adrfam": "ipv4", 00:27:39.056 "trsvcid": "4420", 00:27:39.056 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:39.056 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:39.056 "hdgst": false, 00:27:39.056 "ddgst": false 00:27:39.056 }, 00:27:39.056 "method": "bdev_nvme_attach_controller" 00:27:39.056 }' 00:27:39.056 [2024-04-24 10:22:52.144305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.056 [2024-04-24 10:22:52.215501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.428 10:22:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:40.428 10:22:53 -- common/autotest_common.sh@852 -- # return 0 00:27:40.428 10:22:53 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:40.428 10:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.428 10:22:53 -- common/autotest_common.sh@10 -- # set +x 00:27:40.428 10:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.428 10:22:53 -- target/shutdown.sh@83 -- # kill -9 416116 00:27:40.428 10:22:53 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:40.428 10:22:53 -- target/shutdown.sh@87 -- # sleep 1 00:27:41.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 416116 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:41.363 10:22:54 -- target/shutdown.sh@88 -- # kill -0 415833 00:27:41.363 10:22:54 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:41.363 10:22:54 -- 
target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:41.363 10:22:54 -- nvmf/common.sh@520 -- # config=() 00:27:41.363 10:22:54 -- nvmf/common.sh@520 -- # local subsystem config 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 [2024-04-24 10:22:54.625996] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:27:41.363 [2024-04-24 10:22:54.626044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416521 ] 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.363 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.363 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.363 { 00:27:41.363 "params": { 00:27:41.363 "name": "Nvme$subsystem", 00:27:41.363 "trtype": "$TEST_TRANSPORT", 00:27:41.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.363 "adrfam": "ipv4", 00:27:41.363 "trsvcid": "$NVMF_PORT", 00:27:41.363 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.363 "hdgst": ${hdgst:-false}, 00:27:41.363 "ddgst": ${ddgst:-false} 00:27:41.363 }, 00:27:41.363 "method": "bdev_nvme_attach_controller" 00:27:41.363 } 00:27:41.363 EOF 00:27:41.363 )") 00:27:41.621 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.621 10:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:41.621 10:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:41.621 { 00:27:41.621 "params": { 00:27:41.621 "name": "Nvme$subsystem", 00:27:41.621 "trtype": "$TEST_TRANSPORT", 00:27:41.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.621 "adrfam": "ipv4", 00:27:41.621 "trsvcid": "$NVMF_PORT", 00:27:41.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.621 "hdgst": ${hdgst:-false}, 00:27:41.621 "ddgst": ${ddgst:-false} 00:27:41.621 }, 00:27:41.621 "method": "bdev_nvme_attach_controller" 00:27:41.621 } 00:27:41.621 EOF 00:27:41.621 )") 00:27:41.621 10:22:54 -- nvmf/common.sh@542 -- # cat 00:27:41.621 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.621 10:22:54 -- nvmf/common.sh@544 -- # jq . 00:27:41.621 10:22:54 -- nvmf/common.sh@545 -- # IFS=, 00:27:41.621 10:22:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:41.621 "params": { 00:27:41.621 "name": "Nvme1", 00:27:41.621 "trtype": "tcp", 00:27:41.621 "traddr": "10.0.0.2", 00:27:41.621 "adrfam": "ipv4", 00:27:41.621 "trsvcid": "4420", 00:27:41.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:41.621 "hdgst": false, 00:27:41.621 "ddgst": false 00:27:41.621 }, 00:27:41.621 "method": "bdev_nvme_attach_controller" 00:27:41.621 },{ 00:27:41.621 "params": { 00:27:41.621 "name": "Nvme2", 00:27:41.621 "trtype": "tcp", 00:27:41.621 "traddr": "10.0.0.2", 00:27:41.621 "adrfam": "ipv4", 00:27:41.621 "trsvcid": "4420", 00:27:41.621 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:41.621 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:41.621 "hdgst": false, 00:27:41.621 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme3", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme4", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme5", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": 
"Nvme6", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme7", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme8", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme9", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 },{ 00:27:41.622 "params": { 00:27:41.622 "name": "Nvme10", 00:27:41.622 "trtype": "tcp", 00:27:41.622 "traddr": "10.0.0.2", 00:27:41.622 "adrfam": "ipv4", 00:27:41.622 "trsvcid": "4420", 00:27:41.622 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:41.622 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:41.622 "hdgst": false, 00:27:41.622 "ddgst": false 00:27:41.622 }, 00:27:41.622 "method": "bdev_nvme_attach_controller" 00:27:41.622 }' 00:27:41.622 [2024-04-24 10:22:54.682379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.622 [2024-04-24 10:22:54.754559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.995 Running I/O for 1 seconds... 
00:27:44.369 00:27:44.369 Latency(us) 00:27:44.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.369 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme1n1 : 1.04 465.73 29.11 0.00 0.00 134943.51 20971.52 124917.31 00:27:44.369 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme2n1 : 1.09 445.89 27.87 0.00 0.00 134601.24 25530.55 116711.07 00:27:44.369 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme3n1 : 1.08 485.35 30.33 0.00 0.00 128361.80 15842.62 134035.37 00:27:44.369 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme4n1 : 1.08 484.88 30.30 0.00 0.00 127772.49 15158.76 127652.73 00:27:44.369 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme5n1 : 1.07 453.87 28.37 0.00 0.00 135059.15 12936.24 121270.09 00:27:44.369 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme6n1 : 1.12 464.60 29.04 0.00 0.00 127377.34 12708.29 134035.37 00:27:44.369 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme7n1 : 1.07 486.72 30.42 0.00 0.00 124952.52 14246.96 107137.11 00:27:44.369 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme8n1 : 1.08 485.37 30.34 0.00 0.00 124394.99 2877.89 114431.55 00:27:44.369 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme9n1 : 1.09 487.40 30.46 0.00 0.00 123637.40 8092.27 103489.89 00:27:44.369 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.369 Verification LBA range: start 0x0 length 0x400 00:27:44.369 Nvme10n1 : 1.13 469.72 29.36 0.00 0.00 123271.03 6781.55 104857.60 00:27:44.369 =================================================================================================================== 00:27:44.369 Total : 4729.53 295.60 0.00 0.00 128275.98 2877.89 134035.37 00:27:44.369 10:22:57 -- target/shutdown.sh@93 -- # stoptarget 00:27:44.369 10:22:57 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:44.369 10:22:57 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:44.369 10:22:57 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.369 10:22:57 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:44.369 10:22:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:44.369 10:22:57 -- nvmf/common.sh@116 -- # sync 00:27:44.369 10:22:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:44.369 10:22:57 -- nvmf/common.sh@119 -- # set +e 00:27:44.369 10:22:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:44.369 10:22:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:44.369 rmmod nvme_tcp 00:27:44.369 rmmod nvme_fabrics 00:27:44.369 rmmod nvme_keyring 
00:27:44.369 10:22:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:44.369 10:22:57 -- nvmf/common.sh@123 -- # set -e 00:27:44.369 10:22:57 -- nvmf/common.sh@124 -- # return 0 00:27:44.369 10:22:57 -- nvmf/common.sh@477 -- # '[' -n 415833 ']' 00:27:44.369 10:22:57 -- nvmf/common.sh@478 -- # killprocess 415833 00:27:44.369 10:22:57 -- common/autotest_common.sh@926 -- # '[' -z 415833 ']' 00:27:44.369 10:22:57 -- common/autotest_common.sh@930 -- # kill -0 415833 00:27:44.369 10:22:57 -- common/autotest_common.sh@931 -- # uname 00:27:44.369 10:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:44.369 10:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 415833 00:27:44.369 10:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:44.369 10:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:44.369 10:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 415833' 00:27:44.369 killing process with pid 415833 00:27:44.369 10:22:57 -- common/autotest_common.sh@945 -- # kill 415833 00:27:44.369 10:22:57 -- common/autotest_common.sh@950 -- # wait 415833 00:27:44.949 10:22:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.949 10:22:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:44.949 10:22:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:44.949 10:22:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.949 10:22:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:44.949 10:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.949 10:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.949 10:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.920 10:23:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:46.920 00:27:46.920 real 0m14.613s 00:27:46.920 user 0m34.198s 00:27:46.920 sys 0m5.196s 00:27:46.920 10:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.920 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.920 ************************************ 00:27:46.920 END TEST nvmf_shutdown_tc1 00:27:46.920 ************************************ 00:27:46.920 10:23:00 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:46.920 10:23:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:46.920 10:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.920 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.920 ************************************ 00:27:46.920 START TEST nvmf_shutdown_tc2 00:27:46.920 ************************************ 00:27:46.920 10:23:00 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:27:46.920 10:23:00 -- target/shutdown.sh@98 -- # starttarget 00:27:46.920 10:23:00 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:46.920 10:23:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:46.920 10:23:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.920 10:23:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:46.920 10:23:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:46.920 10:23:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:46.920 10:23:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.920 10:23:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.920 10:23:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.920 10:23:00 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:46.920 10:23:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:46.920 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.920 10:23:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:46.920 10:23:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:46.920 10:23:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:46.920 10:23:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:46.920 10:23:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:46.920 10:23:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:46.920 10:23:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:46.920 10:23:00 -- nvmf/common.sh@294 -- # net_devs=() 00:27:46.920 10:23:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:46.920 10:23:00 -- nvmf/common.sh@295 -- # e810=() 00:27:46.920 10:23:00 -- nvmf/common.sh@295 -- # local -ga e810 00:27:46.920 10:23:00 -- nvmf/common.sh@296 -- # x722=() 00:27:46.920 10:23:00 -- nvmf/common.sh@296 -- # local -ga x722 00:27:46.920 10:23:00 -- nvmf/common.sh@297 -- # mlx=() 00:27:46.920 10:23:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:46.920 10:23:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.920 10:23:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:46.920 10:23:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:46.920 10:23:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:46.920 10:23:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:46.920 10:23:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:46.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:46.920 10:23:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:46.920 10:23:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:46.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:46.920 10:23:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:46.920 10:23:00 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:46.920 10:23:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:46.920 10:23:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.920 10:23:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:46.920 10:23:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.920 10:23:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:46.920 Found net devices under 0000:86:00.0: cvl_0_0 00:27:46.920 10:23:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.920 10:23:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:46.920 10:23:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.920 10:23:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:46.920 10:23:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.920 10:23:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:46.920 Found net devices under 0000:86:00.1: cvl_0_1 00:27:46.920 10:23:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.920 10:23:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:46.920 10:23:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:46.920 10:23:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:46.920 10:23:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:46.920 10:23:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.920 10:23:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.920 10:23:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.920 10:23:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:46.920 10:23:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.920 10:23:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.920 10:23:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:46.920 10:23:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.920 10:23:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.920 10:23:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:46.920 10:23:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:46.920 10:23:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.920 10:23:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.180 10:23:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.180 10:23:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.180 10:23:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:47.180 10:23:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.180 10:23:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.180 10:23:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
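Stripped of timestamps, the nvmf_tcp_init plumbing just traced boils down to the commands below, copied from the trace (the cvl_0_0/cvl_0_1 names come from the two E810 ports discovered a few lines earlier): the first port becomes the target NIC inside a private network namespace at 10.0.0.2, while the second stays in the root namespace as the 10.0.0.1 initiator.

# Target/initiator split used by the TCP tests, as traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic on port 4420 in through the initiator-side NIC.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow confirm the path in both directions before the target application is started.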
00:27:47.180 10:23:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:47.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:27:47.180 00:27:47.180 --- 10.0.0.2 ping statistics --- 00:27:47.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.180 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:47.180 10:23:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:27:47.180 00:27:47.180 --- 10.0.0.1 ping statistics --- 00:27:47.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.180 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:27:47.180 10:23:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.180 10:23:00 -- nvmf/common.sh@410 -- # return 0 00:27:47.180 10:23:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:47.180 10:23:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.180 10:23:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:47.180 10:23:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:47.180 10:23:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.180 10:23:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:47.180 10:23:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:47.180 10:23:00 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:47.180 10:23:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:47.180 10:23:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:47.180 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:47.180 10:23:00 -- nvmf/common.sh@469 -- # nvmfpid=417630 00:27:47.180 10:23:00 -- nvmf/common.sh@470 -- # waitforlisten 417630 00:27:47.180 10:23:00 -- common/autotest_common.sh@819 -- # '[' -z 417630 ']' 00:27:47.180 10:23:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.180 10:23:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:47.180 10:23:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.180 10:23:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:47.180 10:23:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:47.180 10:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:47.180 [2024-04-24 10:23:00.458484] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:27:47.180 [2024-04-24 10:23:00.458528] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.439 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.439 [2024-04-24 10:23:00.515349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.439 [2024-04-24 10:23:00.592883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:47.439 [2024-04-24 10:23:00.592992] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.439 [2024-04-24 10:23:00.593000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.439 [2024-04-24 10:23:00.593007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.439 [2024-04-24 10:23:00.593046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.439 [2024-04-24 10:23:00.593147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.439 [2024-04-24 10:23:00.593255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.439 [2024-04-24 10:23:00.593256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:48.005 10:23:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.005 10:23:01 -- common/autotest_common.sh@852 -- # return 0 00:27:48.005 10:23:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:48.005 10:23:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:48.005 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:27:48.263 10:23:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.263 10:23:01 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.263 10:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.263 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:27:48.263 [2024-04-24 10:23:01.301350] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.263 10:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.263 10:23:01 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:48.263 10:23:01 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:48.263 10:23:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:48.263 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:27:48.263 10:23:01 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- 
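The four "Reactor started on core 1..4" notices follow directly from the -m 0x1E mask handed to nvmf_tgt above: 0x1E is binary 11110, so bits 1 through 4 are set and core 0 is deliberately left free for the bdevperf client, which pins itself there with -c 0x1. A one-liner check of that arithmetic:

# 0x1E -> 0b11110: reactors on cores 1-4, core 0 unmasked.
printf 'mask 0x%X -> cores:' 0x1E
for ((c = 0; c < 8; c++)); do
    (( (0x1E >> c) & 1 )) && printf ' %d' "$c"
done
echo    # prints: mask 0x1E -> cores: 1 2 3 4

The trace_register_description "name ... too long" *ERROR* above is registration noise from the 0xFFFF tracepoint mask and does not stop the application, as the reactor notices that follow it show.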
target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.263 10:23:01 -- target/shutdown.sh@28 -- # cat 00:27:48.263 10:23:01 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:48.263 10:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.263 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:27:48.263 Malloc1 00:27:48.263 [2024-04-24 10:23:01.397278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.263 Malloc2 00:27:48.263 Malloc3 00:27:48.263 Malloc4 00:27:48.521 Malloc5 00:27:48.521 Malloc6 00:27:48.521 Malloc7 00:27:48.521 Malloc8 00:27:48.521 Malloc9 00:27:48.521 Malloc10 00:27:48.521 10:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.521 10:23:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:48.521 10:23:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:48.521 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:27:48.779 10:23:01 -- target/shutdown.sh@102 -- # perfpid=417911 00:27:48.779 10:23:01 -- target/shutdown.sh@103 -- # waitforlisten 417911 /var/tmp/bdevperf.sock 00:27:48.779 10:23:01 -- common/autotest_common.sh@819 -- # '[' -z 417911 ']' 00:27:48.779 10:23:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.779 10:23:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.779 10:23:01 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:48.779 10:23:01 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:48.779 10:23:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
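The create_subsystems stage traced above (the run of "# cat" appends between the rm -rf of rpcs.txt and the rpc_cmd call) batches the target setup instead of issuing one RPC at a time: each of the ten iterations appends a few RPC lines to rpcs.txt, and the file is replayed through a single rpc_cmd session, which is what produces the Malloc1..Malloc10 and listener notices. A rough sketch, with the RPC argument spellings abbreviated and therefore assumed rather than read from this trace:

# Batched subsystem creation (argument details assumed; only the
# rm/for/cat/rpc_cmd skeleton is visible in the xtrace above).
rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i $MALLOC_SIZE $MALLOC_BLOCK_SIZE
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF
done
rpc_cmd < "$testdir/rpcs.txt"

The bdevperf instance being waited on here was started with -q 64 -o 65536 -w verify -t 10: queue depth 64, 64 KiB I/Os, a verify workload, and a 10-second run against all ten attached controllers.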
00:27:48.779 10:23:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.779 10:23:01 -- nvmf/common.sh@520 -- # config=() 00:27:48.779 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:27:48.779 10:23:01 -- nvmf/common.sh@520 -- # local subsystem config 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.779 { 00:27:48.779 "params": { 00:27:48.779 "name": "Nvme$subsystem", 00:27:48.779 "trtype": "$TEST_TRANSPORT", 00:27:48.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.779 "adrfam": "ipv4", 00:27:48.779 "trsvcid": "$NVMF_PORT", 00:27:48.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.779 "hdgst": ${hdgst:-false}, 00:27:48.779 "ddgst": ${ddgst:-false} 00:27:48.779 }, 00:27:48.779 "method": "bdev_nvme_attach_controller" 00:27:48.779 } 00:27:48.779 EOF 00:27:48.779 )") 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.779 { 00:27:48.779 "params": { 00:27:48.779 "name": "Nvme$subsystem", 00:27:48.779 "trtype": "$TEST_TRANSPORT", 00:27:48.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.779 "adrfam": "ipv4", 00:27:48.779 "trsvcid": "$NVMF_PORT", 00:27:48.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.779 "hdgst": ${hdgst:-false}, 00:27:48.779 "ddgst": ${ddgst:-false} 00:27:48.779 }, 00:27:48.779 "method": "bdev_nvme_attach_controller" 00:27:48.779 } 00:27:48.779 EOF 00:27:48.779 )") 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.779 { 00:27:48.779 "params": { 00:27:48.779 "name": "Nvme$subsystem", 00:27:48.779 "trtype": "$TEST_TRANSPORT", 00:27:48.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.779 "adrfam": "ipv4", 00:27:48.779 "trsvcid": "$NVMF_PORT", 00:27:48.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.779 "hdgst": ${hdgst:-false}, 00:27:48.779 "ddgst": ${ddgst:-false} 00:27:48.779 }, 00:27:48.779 "method": "bdev_nvme_attach_controller" 00:27:48.779 } 00:27:48.779 EOF 00:27:48.779 )") 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.779 { 00:27:48.779 "params": { 00:27:48.779 "name": "Nvme$subsystem", 00:27:48.779 "trtype": "$TEST_TRANSPORT", 00:27:48.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.779 "adrfam": "ipv4", 00:27:48.779 "trsvcid": "$NVMF_PORT", 00:27:48.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.779 "hdgst": ${hdgst:-false}, 00:27:48.779 "ddgst": ${ddgst:-false} 00:27:48.779 }, 00:27:48.779 "method": "bdev_nvme_attach_controller" 00:27:48.779 } 00:27:48.779 EOF 00:27:48.779 )") 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.779 { 00:27:48.779 "params": { 00:27:48.779 "name": "Nvme$subsystem", 00:27:48.779 "trtype": 
"$TEST_TRANSPORT", 00:27:48.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.779 "adrfam": "ipv4", 00:27:48.779 "trsvcid": "$NVMF_PORT", 00:27:48.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.779 "hdgst": ${hdgst:-false}, 00:27:48.779 "ddgst": ${ddgst:-false} 00:27:48.779 }, 00:27:48.779 "method": "bdev_nvme_attach_controller" 00:27:48.779 } 00:27:48.779 EOF 00:27:48.779 )") 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.779 { 00:27:48.779 "params": { 00:27:48.779 "name": "Nvme$subsystem", 00:27:48.779 "trtype": "$TEST_TRANSPORT", 00:27:48.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.779 "adrfam": "ipv4", 00:27:48.779 "trsvcid": "$NVMF_PORT", 00:27:48.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.779 "hdgst": ${hdgst:-false}, 00:27:48.779 "ddgst": ${ddgst:-false} 00:27:48.779 }, 00:27:48.779 "method": "bdev_nvme_attach_controller" 00:27:48.779 } 00:27:48.779 EOF 00:27:48.779 )") 00:27:48.779 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.779 [2024-04-24 10:23:01.862622] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:27:48.779 [2024-04-24 10:23:01.862673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417911 ] 00:27:48.779 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.780 { 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme$subsystem", 00:27:48.780 "trtype": "$TEST_TRANSPORT", 00:27:48.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "$NVMF_PORT", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.780 "hdgst": ${hdgst:-false}, 00:27:48.780 "ddgst": ${ddgst:-false} 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 } 00:27:48.780 EOF 00:27:48.780 )") 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.780 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.780 { 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme$subsystem", 00:27:48.780 "trtype": "$TEST_TRANSPORT", 00:27:48.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "$NVMF_PORT", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.780 "hdgst": ${hdgst:-false}, 00:27:48.780 "ddgst": ${ddgst:-false} 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 } 00:27:48.780 EOF 00:27:48.780 )") 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.780 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.780 { 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme$subsystem", 00:27:48.780 "trtype": "$TEST_TRANSPORT", 00:27:48.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": 
"$NVMF_PORT", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.780 "hdgst": ${hdgst:-false}, 00:27:48.780 "ddgst": ${ddgst:-false} 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 } 00:27:48.780 EOF 00:27:48.780 )") 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.780 10:23:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:48.780 { 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme$subsystem", 00:27:48.780 "trtype": "$TEST_TRANSPORT", 00:27:48.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "$NVMF_PORT", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.780 "hdgst": ${hdgst:-false}, 00:27:48.780 "ddgst": ${ddgst:-false} 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 } 00:27:48.780 EOF 00:27:48.780 )") 00:27:48.780 10:23:01 -- nvmf/common.sh@542 -- # cat 00:27:48.780 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.780 10:23:01 -- nvmf/common.sh@544 -- # jq . 00:27:48.780 10:23:01 -- nvmf/common.sh@545 -- # IFS=, 00:27:48.780 10:23:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme1", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme2", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme3", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme4", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme5", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 
"params": { 00:27:48.780 "name": "Nvme6", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme7", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme8", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme9", 00:27:48.780 "trtype": "tcp", 00:27:48.780 "traddr": "10.0.0.2", 00:27:48.780 "adrfam": "ipv4", 00:27:48.780 "trsvcid": "4420", 00:27:48.780 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:48.780 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:48.780 "hdgst": false, 00:27:48.780 "ddgst": false 00:27:48.780 }, 00:27:48.780 "method": "bdev_nvme_attach_controller" 00:27:48.780 },{ 00:27:48.780 "params": { 00:27:48.780 "name": "Nvme10", 00:27:48.780 "trtype": "tcp", 00:27:48.781 "traddr": "10.0.0.2", 00:27:48.781 "adrfam": "ipv4", 00:27:48.781 "trsvcid": "4420", 00:27:48.781 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:48.781 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:48.781 "hdgst": false, 00:27:48.781 "ddgst": false 00:27:48.781 }, 00:27:48.781 "method": "bdev_nvme_attach_controller" 00:27:48.781 }' 00:27:48.781 [2024-04-24 10:23:01.917941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.781 [2024-04-24 10:23:01.996560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.676 Running I/O for 10 seconds... 
00:27:50.933 10:23:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:50.933 10:23:04 -- common/autotest_common.sh@852 -- # return 0 00:27:50.933 10:23:04 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:50.933 10:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.933 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:27:50.933 10:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.933 10:23:04 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:50.933 10:23:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:50.933 10:23:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:50.933 10:23:04 -- target/shutdown.sh@57 -- # local ret=1 00:27:50.933 10:23:04 -- target/shutdown.sh@58 -- # local i 00:27:50.933 10:23:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:50.933 10:23:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.933 10:23:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.933 10:23:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.933 10:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.933 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:27:50.933 10:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.933 10:23:04 -- target/shutdown.sh@60 -- # read_io_count=87 00:27:50.933 10:23:04 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:27:50.933 10:23:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:51.191 10:23:04 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:51.191 10:23:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:51.191 10:23:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:51.191 10:23:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:51.191 10:23:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.191 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:27:51.191 10:23:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.191 10:23:04 -- target/shutdown.sh@60 -- # read_io_count=251 00:27:51.191 10:23:04 -- target/shutdown.sh@63 -- # '[' 251 -ge 100 ']' 00:27:51.191 10:23:04 -- target/shutdown.sh@64 -- # ret=0 00:27:51.191 10:23:04 -- target/shutdown.sh@65 -- # break 00:27:51.191 10:23:04 -- target/shutdown.sh@69 -- # return 0 00:27:51.191 10:23:04 -- target/shutdown.sh@109 -- # killprocess 417911 00:27:51.191 10:23:04 -- common/autotest_common.sh@926 -- # '[' -z 417911 ']' 00:27:51.191 10:23:04 -- common/autotest_common.sh@930 -- # kill -0 417911 00:27:51.191 10:23:04 -- common/autotest_common.sh@931 -- # uname 00:27:51.191 10:23:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.191 10:23:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 417911 00:27:51.191 10:23:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:51.191 10:23:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:51.191 10:23:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 417911' 00:27:51.191 killing process with pid 417911 00:27:51.191 10:23:04 -- common/autotest_common.sh@945 -- # kill 417911 00:27:51.191 10:23:04 -- common/autotest_common.sh@950 -- # wait 417911 00:27:51.448 Received shutdown signal, test time was about 0.748626 seconds 00:27:51.448 00:27:51.448 Latency(us) 00:27:51.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:27:51.448 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme1n1 : 0.70 469.17 29.32 0.00 0.00 132287.44 6582.09 117622.87 00:27:51.448 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme2n1 : 0.69 467.91 29.24 0.00 0.00 132366.05 10314.80 122181.90 00:27:51.448 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme3n1 : 0.71 500.43 31.28 0.00 0.00 122679.06 15842.62 122181.90 00:27:51.448 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme4n1 : 0.75 475.12 29.70 0.00 0.00 121618.92 17096.35 105313.50 00:27:51.448 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme5n1 : 0.71 502.46 31.40 0.00 0.00 119953.47 14930.81 110784.33 00:27:51.448 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme6n1 : 0.71 502.20 31.39 0.00 0.00 118452.65 19147.91 95739.55 00:27:51.448 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme7n1 : 0.75 474.64 29.67 0.00 0.00 117740.22 18350.08 97563.16 00:27:51.448 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.448 Nvme8n1 : 0.70 452.85 28.30 0.00 0.00 128445.13 21313.45 101210.38 00:27:51.448 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.448 Verification LBA range: start 0x0 length 0x400 00:27:51.449 Nvme9n1 : 0.70 449.85 28.12 0.00 0.00 127767.07 20515.62 103489.89 00:27:51.449 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:51.449 Verification LBA range: start 0x0 length 0x400 00:27:51.449 Nvme10n1 : 0.70 461.07 28.82 0.00 0.00 123794.90 8833.11 103033.99 00:27:51.449 =================================================================================================================== 00:27:51.449 Total : 4755.71 297.23 0.00 0.00 124297.91 6582.09 122181.90 00:27:51.705 10:23:04 -- target/shutdown.sh@112 -- # sleep 1 00:27:52.636 10:23:05 -- target/shutdown.sh@113 -- # kill -0 417630 00:27:52.636 10:23:05 -- target/shutdown.sh@115 -- # stoptarget 00:27:52.636 10:23:05 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:52.636 10:23:05 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:52.636 10:23:05 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.636 10:23:05 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:52.636 10:23:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:52.636 10:23:05 -- nvmf/common.sh@116 -- # sync 00:27:52.636 10:23:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:52.636 10:23:05 -- nvmf/common.sh@119 -- # set +e 00:27:52.636 10:23:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:52.636 10:23:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:52.636 rmmod nvme_tcp 00:27:52.636 rmmod nvme_fabrics 00:27:52.636 rmmod 
nvme_keyring 00:27:52.636 10:23:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:52.636 10:23:05 -- nvmf/common.sh@123 -- # set -e 00:27:52.636 10:23:05 -- nvmf/common.sh@124 -- # return 0 00:27:52.636 10:23:05 -- nvmf/common.sh@477 -- # '[' -n 417630 ']' 00:27:52.636 10:23:05 -- nvmf/common.sh@478 -- # killprocess 417630 00:27:52.636 10:23:05 -- common/autotest_common.sh@926 -- # '[' -z 417630 ']' 00:27:52.636 10:23:05 -- common/autotest_common.sh@930 -- # kill -0 417630 00:27:52.636 10:23:05 -- common/autotest_common.sh@931 -- # uname 00:27:52.636 10:23:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:52.636 10:23:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 417630 00:27:52.636 10:23:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:52.636 10:23:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:52.636 10:23:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 417630' 00:27:52.637 killing process with pid 417630 00:27:52.637 10:23:05 -- common/autotest_common.sh@945 -- # kill 417630 00:27:52.637 10:23:05 -- common/autotest_common.sh@950 -- # wait 417630 00:27:53.202 10:23:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:53.202 10:23:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:53.202 10:23:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:53.203 10:23:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.203 10:23:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:53.203 10:23:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.203 10:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.203 10:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.103 10:23:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:55.103 00:27:55.103 real 0m8.211s 00:27:55.103 user 0m25.774s 00:27:55.103 sys 0m1.323s 00:27:55.103 10:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.103 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.103 ************************************ 00:27:55.103 END TEST nvmf_shutdown_tc2 00:27:55.103 ************************************ 00:27:55.361 10:23:08 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:55.361 10:23:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.361 10:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.361 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.361 ************************************ 00:27:55.361 START TEST nvmf_shutdown_tc3 00:27:55.361 ************************************ 00:27:55.361 10:23:08 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:55.361 10:23:08 -- target/shutdown.sh@120 -- # starttarget 00:27:55.361 10:23:08 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:55.361 10:23:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:55.361 10:23:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.361 10:23:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:55.361 10:23:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:55.361 10:23:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:55.361 10:23:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.361 10:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.361 10:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.361 10:23:08 
-- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:55.361 10:23:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:55.361 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.361 10:23:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:55.361 10:23:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:55.361 10:23:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:55.361 10:23:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:55.361 10:23:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:55.361 10:23:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:55.361 10:23:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:55.361 10:23:08 -- nvmf/common.sh@294 -- # net_devs=() 00:27:55.361 10:23:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:55.361 10:23:08 -- nvmf/common.sh@295 -- # e810=() 00:27:55.361 10:23:08 -- nvmf/common.sh@295 -- # local -ga e810 00:27:55.361 10:23:08 -- nvmf/common.sh@296 -- # x722=() 00:27:55.361 10:23:08 -- nvmf/common.sh@296 -- # local -ga x722 00:27:55.361 10:23:08 -- nvmf/common.sh@297 -- # mlx=() 00:27:55.361 10:23:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:55.361 10:23:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.361 10:23:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:55.361 10:23:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:55.361 10:23:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:55.361 10:23:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:55.361 10:23:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:55.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:55.361 10:23:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:55.361 10:23:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:55.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:55.361 10:23:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:55.361 10:23:08 
-- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:55.361 10:23:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:55.361 10:23:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.361 10:23:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:55.361 10:23:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.361 10:23:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:55.361 Found net devices under 0000:86:00.0: cvl_0_0 00:27:55.361 10:23:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.361 10:23:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:55.361 10:23:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.361 10:23:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:55.361 10:23:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.361 10:23:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:55.361 Found net devices under 0000:86:00.1: cvl_0_1 00:27:55.361 10:23:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.361 10:23:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:55.361 10:23:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:55.361 10:23:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:55.361 10:23:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:55.361 10:23:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.361 10:23:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.361 10:23:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.361 10:23:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:55.361 10:23:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.361 10:23:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.361 10:23:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:55.361 10:23:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.361 10:23:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.361 10:23:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:55.361 10:23:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:55.361 10:23:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.362 10:23:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.362 10:23:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.362 10:23:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.362 10:23:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:55.362 10:23:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.362 10:23:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.362 10:23:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
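The nvmf_tcp_init sequence traced above carves the target-side port out into its own network namespace, so the target and the initiator can exchange real TCP traffic on a single host; the ping exchange that follows verifies the path before any NVMe/TCP traffic is attempted. A condensed sketch of those same commands (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from this run; it needs root and the two ports cabled back-to-back):

# Move the target-side port into a private namespace and address both ends.
TGT_IF=cvl_0_0   # becomes NVMF_TARGET_INTERFACE, lives inside the namespace
INI_IF=cvl_0_1   # becomes NVMF_INITIATOR_INTERFACE, stays in the root namespace
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP connections (port 4420) arriving on the initiator-side port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, exactly as the log does next.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1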
00:27:55.362 10:23:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:55.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:27:55.362 00:27:55.362 --- 10.0.0.2 ping statistics --- 00:27:55.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.362 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:55.362 10:23:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:55.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:55.362 00:27:55.362 --- 10.0.0.1 ping statistics --- 00:27:55.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.362 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:55.362 10:23:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.362 10:23:08 -- nvmf/common.sh@410 -- # return 0 00:27:55.362 10:23:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:55.362 10:23:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.362 10:23:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:55.362 10:23:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:55.362 10:23:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.362 10:23:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:55.362 10:23:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:55.619 10:23:08 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:55.619 10:23:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:55.619 10:23:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:55.619 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.619 10:23:08 -- nvmf/common.sh@469 -- # nvmfpid=419023 00:27:55.619 10:23:08 -- nvmf/common.sh@470 -- # waitforlisten 419023 00:27:55.620 10:23:08 -- common/autotest_common.sh@819 -- # '[' -z 419023 ']' 00:27:55.620 10:23:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.620 10:23:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:55.620 10:23:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.620 10:23:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:55.620 10:23:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.620 10:23:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:55.620 [2024-04-24 10:23:08.715260] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:27:55.620 [2024-04-24 10:23:08.715305] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.620 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.620 [2024-04-24 10:23:08.774243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.620 [2024-04-24 10:23:08.852391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:55.620 [2024-04-24 10:23:08.852500] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.620 [2024-04-24 10:23:08.852508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.620 [2024-04-24 10:23:08.852514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.620 [2024-04-24 10:23:08.852548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.620 [2024-04-24 10:23:08.852657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.620 [2024-04-24 10:23:08.852761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.620 [2024-04-24 10:23:08.852762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:56.550 10:23:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:56.550 10:23:09 -- common/autotest_common.sh@852 -- # return 0 00:27:56.550 10:23:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:56.550 10:23:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:56.550 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.550 10:23:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.550 10:23:09 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:56.550 10:23:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.550 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.550 [2024-04-24 10:23:09.551377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.550 10:23:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.550 10:23:09 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:56.550 10:23:09 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:56.550 10:23:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:56.550 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.550 10:23:09 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- 
target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.550 10:23:09 -- target/shutdown.sh@28 -- # cat 00:27:56.550 10:23:09 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:56.551 10:23:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.551 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.551 Malloc1 00:27:56.551 [2024-04-24 10:23:09.647372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.551 Malloc2 00:27:56.551 Malloc3 00:27:56.551 Malloc4 00:27:56.551 Malloc5 00:27:56.807 Malloc6 00:27:56.807 Malloc7 00:27:56.807 Malloc8 00:27:56.807 Malloc9 00:27:56.807 Malloc10 00:27:56.807 10:23:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.807 10:23:10 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:56.807 10:23:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:56.807 10:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:56.807 10:23:10 -- target/shutdown.sh@124 -- # perfpid=419304 00:27:56.807 10:23:10 -- target/shutdown.sh@125 -- # waitforlisten 419304 /var/tmp/bdevperf.sock 00:27:56.807 10:23:10 -- common/autotest_common.sh@819 -- # '[' -z 419304 ']' 00:27:56.807 10:23:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:56.807 10:23:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:56.807 10:23:10 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:56.807 10:23:10 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:56.807 10:23:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:56.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
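The ten "# cat" steps above append one block of RPCs per subsystem to rpcs.txt (xtrace shows the cat calls but not the heredoc text they write), and the single rpc_cmd at shutdown.sh@35 then plays the whole file into the target, which is why Malloc1 through Malloc10 and the TCP listener appear right afterwards. A plausible reconstruction of that loop using standard SPDK RPCs; the bdev size (128 MiB, 512-byte blocks), the serial numbers, and the exact RPC set are illustrative assumptions, not values echoed by this log:

# One newline-separated batch of RPCs per subsystem; scripts/rpc.py can
# execute such a file line by line in a single invocation.
rpcs=rpcs.txt
rm -f "$rpcs"
for i in {1..10}; do
	{
		echo "bdev_malloc_create -b Malloc$i 128 512"                  # 128 MiB, 512 B blocks (assumed)
		echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
		echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
		echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
	} >> "$rpcs"
done
./scripts/rpc.py < "$rpcs"   # against the nvmf_tgt started earlier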
00:27:56.807 10:23:10 -- nvmf/common.sh@520 -- # config=() 00:27:56.807 10:23:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:56.807 10:23:10 -- nvmf/common.sh@520 -- # local subsystem config 00:27:56.807 10:23:10 -- common/autotest_common.sh@10 -- # set +x 00:27:56.807 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:56.807 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:56.807 { 00:27:56.807 "params": { 00:27:56.807 "name": "Nvme$subsystem", 00:27:56.807 "trtype": "$TEST_TRANSPORT", 00:27:56.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.807 "adrfam": "ipv4", 00:27:56.807 "trsvcid": "$NVMF_PORT", 00:27:56.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.807 "hdgst": ${hdgst:-false}, 00:27:56.807 "ddgst": ${ddgst:-false} 00:27:56.807 }, 00:27:56.807 "method": "bdev_nvme_attach_controller" 00:27:56.807 } 00:27:56.807 EOF 00:27:56.807 )") 00:27:56.807 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:56.807 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:56.807 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:56.807 { 00:27:56.807 "params": { 00:27:56.808 "name": "Nvme$subsystem", 00:27:56.808 "trtype": "$TEST_TRANSPORT", 00:27:56.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.808 "adrfam": "ipv4", 00:27:56.808 "trsvcid": "$NVMF_PORT", 00:27:56.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.808 "hdgst": ${hdgst:-false}, 00:27:56.808 "ddgst": ${ddgst:-false} 00:27:56.808 }, 00:27:56.808 "method": "bdev_nvme_attach_controller" 00:27:56.808 } 00:27:56.808 EOF 00:27:56.808 )") 00:27:56.808 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.065 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.065 { 00:27:57.065 "params": { 00:27:57.065 "name": "Nvme$subsystem", 00:27:57.065 "trtype": "$TEST_TRANSPORT", 00:27:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.065 "adrfam": "ipv4", 00:27:57.065 "trsvcid": "$NVMF_PORT", 00:27:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.065 "hdgst": ${hdgst:-false}, 00:27:57.065 "ddgst": ${ddgst:-false} 00:27:57.065 }, 00:27:57.065 "method": "bdev_nvme_attach_controller" 00:27:57.065 } 00:27:57.065 EOF 00:27:57.065 )") 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.065 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.065 { 00:27:57.065 "params": { 00:27:57.065 "name": "Nvme$subsystem", 00:27:57.065 "trtype": "$TEST_TRANSPORT", 00:27:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.065 "adrfam": "ipv4", 00:27:57.065 "trsvcid": "$NVMF_PORT", 00:27:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.065 "hdgst": ${hdgst:-false}, 00:27:57.065 "ddgst": ${ddgst:-false} 00:27:57.065 }, 00:27:57.065 "method": "bdev_nvme_attach_controller" 00:27:57.065 } 00:27:57.065 EOF 00:27:57.065 )") 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.065 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.065 { 00:27:57.065 "params": { 00:27:57.065 "name": "Nvme$subsystem", 00:27:57.065 "trtype": 
"$TEST_TRANSPORT", 00:27:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.065 "adrfam": "ipv4", 00:27:57.065 "trsvcid": "$NVMF_PORT", 00:27:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.065 "hdgst": ${hdgst:-false}, 00:27:57.065 "ddgst": ${ddgst:-false} 00:27:57.065 }, 00:27:57.065 "method": "bdev_nvme_attach_controller" 00:27:57.065 } 00:27:57.065 EOF 00:27:57.065 )") 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.065 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.065 { 00:27:57.065 "params": { 00:27:57.065 "name": "Nvme$subsystem", 00:27:57.065 "trtype": "$TEST_TRANSPORT", 00:27:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.065 "adrfam": "ipv4", 00:27:57.065 "trsvcid": "$NVMF_PORT", 00:27:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.065 "hdgst": ${hdgst:-false}, 00:27:57.065 "ddgst": ${ddgst:-false} 00:27:57.065 }, 00:27:57.065 "method": "bdev_nvme_attach_controller" 00:27:57.065 } 00:27:57.065 EOF 00:27:57.065 )") 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.065 [2024-04-24 10:23:10.112127] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:27:57.065 [2024-04-24 10:23:10.112176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419304 ] 00:27:57.065 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.065 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.065 { 00:27:57.065 "params": { 00:27:57.065 "name": "Nvme$subsystem", 00:27:57.065 "trtype": "$TEST_TRANSPORT", 00:27:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.065 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "$NVMF_PORT", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.066 "hdgst": ${hdgst:-false}, 00:27:57.066 "ddgst": ${ddgst:-false} 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 } 00:27:57.066 EOF 00:27:57.066 )") 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.066 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.066 { 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme$subsystem", 00:27:57.066 "trtype": "$TEST_TRANSPORT", 00:27:57.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "$NVMF_PORT", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.066 "hdgst": ${hdgst:-false}, 00:27:57.066 "ddgst": ${ddgst:-false} 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 } 00:27:57.066 EOF 00:27:57.066 )") 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.066 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.066 { 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme$subsystem", 00:27:57.066 "trtype": "$TEST_TRANSPORT", 00:27:57.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": 
"$NVMF_PORT", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.066 "hdgst": ${hdgst:-false}, 00:27:57.066 "ddgst": ${ddgst:-false} 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 } 00:27:57.066 EOF 00:27:57.066 )") 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.066 10:23:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:57.066 { 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme$subsystem", 00:27:57.066 "trtype": "$TEST_TRANSPORT", 00:27:57.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "$NVMF_PORT", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.066 "hdgst": ${hdgst:-false}, 00:27:57.066 "ddgst": ${ddgst:-false} 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 } 00:27:57.066 EOF 00:27:57.066 )") 00:27:57.066 10:23:10 -- nvmf/common.sh@542 -- # cat 00:27:57.066 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.066 10:23:10 -- nvmf/common.sh@544 -- # jq . 00:27:57.066 10:23:10 -- nvmf/common.sh@545 -- # IFS=, 00:27:57.066 10:23:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme1", 00:27:57.066 "trtype": "tcp", 00:27:57.066 "traddr": "10.0.0.2", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "4420", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.066 "hdgst": false, 00:27:57.066 "ddgst": false 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 },{ 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme2", 00:27:57.066 "trtype": "tcp", 00:27:57.066 "traddr": "10.0.0.2", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "4420", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:57.066 "hdgst": false, 00:27:57.066 "ddgst": false 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 },{ 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme3", 00:27:57.066 "trtype": "tcp", 00:27:57.066 "traddr": "10.0.0.2", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "4420", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:57.066 "hdgst": false, 00:27:57.066 "ddgst": false 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 },{ 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme4", 00:27:57.066 "trtype": "tcp", 00:27:57.066 "traddr": "10.0.0.2", 00:27:57.066 "adrfam": "ipv4", 00:27:57.066 "trsvcid": "4420", 00:27:57.066 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:57.066 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:57.066 "hdgst": false, 00:27:57.066 "ddgst": false 00:27:57.066 }, 00:27:57.066 "method": "bdev_nvme_attach_controller" 00:27:57.066 },{ 00:27:57.066 "params": { 00:27:57.066 "name": "Nvme5", 00:27:57.066 "trtype": "tcp", 00:27:57.066 "traddr": "10.0.0.2", 00:27:57.066 "adrfam": "ipv4", 00:27:57.067 "trsvcid": "4420", 00:27:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:57.067 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:57.067 "hdgst": false, 00:27:57.067 "ddgst": false 00:27:57.067 }, 00:27:57.067 "method": "bdev_nvme_attach_controller" 00:27:57.067 },{ 00:27:57.067 
"params": { 00:27:57.067 "name": "Nvme6", 00:27:57.067 "trtype": "tcp", 00:27:57.067 "traddr": "10.0.0.2", 00:27:57.067 "adrfam": "ipv4", 00:27:57.067 "trsvcid": "4420", 00:27:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:57.067 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:57.067 "hdgst": false, 00:27:57.067 "ddgst": false 00:27:57.067 }, 00:27:57.067 "method": "bdev_nvme_attach_controller" 00:27:57.067 },{ 00:27:57.067 "params": { 00:27:57.067 "name": "Nvme7", 00:27:57.067 "trtype": "tcp", 00:27:57.067 "traddr": "10.0.0.2", 00:27:57.067 "adrfam": "ipv4", 00:27:57.067 "trsvcid": "4420", 00:27:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:57.067 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:57.067 "hdgst": false, 00:27:57.067 "ddgst": false 00:27:57.067 }, 00:27:57.067 "method": "bdev_nvme_attach_controller" 00:27:57.067 },{ 00:27:57.067 "params": { 00:27:57.067 "name": "Nvme8", 00:27:57.067 "trtype": "tcp", 00:27:57.067 "traddr": "10.0.0.2", 00:27:57.067 "adrfam": "ipv4", 00:27:57.067 "trsvcid": "4420", 00:27:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:57.067 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:57.067 "hdgst": false, 00:27:57.067 "ddgst": false 00:27:57.067 }, 00:27:57.067 "method": "bdev_nvme_attach_controller" 00:27:57.067 },{ 00:27:57.067 "params": { 00:27:57.067 "name": "Nvme9", 00:27:57.067 "trtype": "tcp", 00:27:57.067 "traddr": "10.0.0.2", 00:27:57.067 "adrfam": "ipv4", 00:27:57.067 "trsvcid": "4420", 00:27:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:57.067 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:57.067 "hdgst": false, 00:27:57.067 "ddgst": false 00:27:57.067 }, 00:27:57.067 "method": "bdev_nvme_attach_controller" 00:27:57.067 },{ 00:27:57.067 "params": { 00:27:57.067 "name": "Nvme10", 00:27:57.067 "trtype": "tcp", 00:27:57.067 "traddr": "10.0.0.2", 00:27:57.067 "adrfam": "ipv4", 00:27:57.067 "trsvcid": "4420", 00:27:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:57.067 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:57.067 "hdgst": false, 00:27:57.067 "ddgst": false 00:27:57.067 }, 00:27:57.067 "method": "bdev_nvme_attach_controller" 00:27:57.067 }' 00:27:57.067 [2024-04-24 10:23:10.168144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.067 [2024-04-24 10:23:10.246419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.961 Running I/O for 10 seconds... 
00:27:59.238 10:23:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:59.239 10:23:12 -- common/autotest_common.sh@852 -- # return 0 00:27:59.239 10:23:12 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:59.239 10:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.239 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:27:59.239 10:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.239 10:23:12 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.239 10:23:12 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:59.239 10:23:12 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:59.239 10:23:12 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:59.239 10:23:12 -- target/shutdown.sh@57 -- # local ret=1 00:27:59.239 10:23:12 -- target/shutdown.sh@58 -- # local i 00:27:59.239 10:23:12 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:59.239 10:23:12 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:59.239 10:23:12 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:59.239 10:23:12 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:59.239 10:23:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.239 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:27:59.239 10:23:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.239 10:23:12 -- target/shutdown.sh@60 -- # read_io_count=167 00:27:59.239 10:23:12 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:27:59.239 10:23:12 -- target/shutdown.sh@64 -- # ret=0 00:27:59.239 10:23:12 -- target/shutdown.sh@65 -- # break 00:27:59.239 10:23:12 -- target/shutdown.sh@69 -- # return 0 00:27:59.239 10:23:12 -- target/shutdown.sh@134 -- # killprocess 419023 00:27:59.239 10:23:12 -- common/autotest_common.sh@926 -- # '[' -z 419023 ']' 00:27:59.239 10:23:12 -- common/autotest_common.sh@930 -- # kill -0 419023 00:27:59.239 10:23:12 -- common/autotest_common.sh@931 -- # uname 00:27:59.239 10:23:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:59.239 10:23:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 419023 00:27:59.239 10:23:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:59.239 10:23:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:59.239 10:23:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 419023' 00:27:59.239 killing process with pid 419023 00:27:59.239 10:23:12 -- common/autotest_common.sh@945 -- # kill 419023 00:27:59.239 10:23:12 -- common/autotest_common.sh@950 -- # wait 419023
00:27:59.239 [2024-04-24 10:23:12.419350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ff900 is same with the state(5) to be set
00:27:59.240 [2024-04-24 10:23:12.420906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1602050 is same with the state(5) to be set
00:27:59.242 [2024-04-24 10:23:12.424241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set
[identical nvmf_tcp_qpair_set_recv_state *ERROR* lines repeat dozens of times for each of the three tqpairs above (0x15ff900 at 10:23:12.419, 0x1602050 at 10:23:12.420-.421, 0x15ffdb0 at 10:23:12.424); only the first occurrence of each is kept here, and the capture breaks off mid-entry during the last run]
*ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.424649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ffdb0 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 
10:23:12.428535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same 
with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.243 [2024-04-24 10:23:12.428698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428811] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.428866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600260 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.429701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2057710 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.429830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429845] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb660 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.429922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.429971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.429977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205a470 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.430006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.430014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.430031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.430044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.244 [2024-04-24 10:23:12.430058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20726b0 is same with the state(5) to be set 00:27:59.244 [2024-04-24 10:23:12.430172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.244 [2024-04-24 10:23:12.430304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.244 [2024-04-24 10:23:12.430414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.244 [2024-04-24 10:23:12.430420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 
[2024-04-24 10:23:12.430458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 
10:23:12.430604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430754] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.245 [2024-04-24 10:23:12.430887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.245 [2024-04-24 10:23:12.430895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.430992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.430999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.246 [2024-04-24 10:23:12.431141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.246 [2024-04-24 10:23:12.431223] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2047b80 was disconnected and freed. reset controller. 
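For anyone triaging this run: the "recv state of tqpair=... is same with the state(5) to be set" flood above does not mark a new fault on every poll. It comes from a defensive guard in SPDK's TCP code (target side at tcp.c:1574, initiator side at nvme_tcp.c:322) that refuses to re-set a qpair's receive state to the value it already holds, yet reports each such attempt at ERROR level, so a qpair parked in one state during teardown repeats the identical line on every poller pass. The ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) notices are the outstanding admin and I/O commands being completed with an abort status as their submission queues are deleted for the controller reset, i.e. the normal drain path. A minimal sketch of the guard, assuming the struct and enum names from SPDK sources of this vintage (struct spdk_nvmf_tcp_qpair, enum nvme_tcp_pdu_recv_state); only the file, line, function name, and message text are confirmed by the log itself:

/* Illustrative reconstruction of the guard behind the repeated ERROR line;
 * not the verbatim SPDK implementation. */
static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                              enum nvme_tcp_pdu_recv_state state)
{
        if (tqpair->recv_state == state) {
                /* Re-requesting the current state is a harmless no-op, but
                 * it is logged at ERROR level on every call, which is what
                 * produces the identical repeated lines in this log. */
                SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                            tqpair, (int)state);
                return;
        }
        tqpair->recv_state = state;
        /* per-state bookkeeping elided */
}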
00:27:59.246 [2024-04-24 10:23:12.431409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16006f0 is same with the state(5) to be set
[2024-04-24 10:23:12.431432 .. 10:23:12.431825] last message repeated 62 times
00:27:59.246 [2024-04-24 10:23:12.431828] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:59.246 [2024-04-24 10:23:12.433303] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:59.246 [2024-04-24 10:23:12.433330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect:
00:27:59.246 [2024-04-24 10:23:12.433347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb660 (9): Bad file descriptor
00:27:59.246 [2024-04-24 10:23:12.434080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600ba0 is same with the state(5) to be set (message repeated ~60 times through 10:23:12.434485)
00:27:59.247 [2024-04-24 10:23:12.434629] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:59.247 [2024-04-24 10:23:12.435008] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:59.247 [2024-04-24 10:23:12.436148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1600f20 is same with the state(5) to be set (message repeated ~50 times through 10:23:12.436479)
00:27:59.247 [2024-04-24 10:23:12.436587] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~60 outstanding READ/WRITE commands (sqid:1, nsid:1, len:128, lba 24320..34688, cid 0..63) each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (command/completion pairs repeated through 10:23:12.437635, interleaved with tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16013d0 is same with the state(5) to be set, repeated ~50 times)
00:27:59.249 [2024-04-24 10:23:12.437645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6020 is same with the state(5) to be set
00:27:59.249 [2024-04-24 10:23:12.437699] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a6020 was disconnected and freed. reset controller.
00:27:59.249 [2024-04-24 10:23:12.437726] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second dump of outstanding I/O: ~40 READ/WRITE commands (sqid:1, nsid:1, len:128, lba 24320..31616) each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (command/completion pairs repeated through 10:23:12.438443)
00:27:59.250 [2024-04-24 10:23:12.438495] nvme_qpair.c: 
[... 2024-04-24 10:23:12.438495-442334: the dump continues (READ/WRITE, various cids, lba 28672-34688, len:128), now interleaved record-by-record with the tcp.c:1574 recv-state error for tqpair=0x1601860 ...]
00:27:59.251 [2024-04-24 10:23:12.442465] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2100710 was disconnected and freed. reset controller.
00:27:59.251 [2024-04-24 10:23:12.442467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601860 is same with the state(5) to be set
[... the recv-state error repeats for tqpair=0x1601860 at 2024-04-24 10:23:12.442369-442756 ...]
00:27:59.251 [2024-04-24 10:23:12.442683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:59.251 [2024-04-24 10:23:12.442727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-04-24 10:23:12.442788-442980: the ASYNC EVENT REQUEST abort repeats for qid:0 cids 1-3 ...]
00:27:59.251 [2024-04-24 10:23:12.443012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078f70 is same with the state(5) to be set
00:27:59.251 [2024-04-24 10:23:12.443056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2057710 (9): Bad file descriptor
[... 2024-04-24 10:23:12.443108-443322: ASYNC EVENT REQUEST (0c) qid:0 cids 0-3 aborted, ABORTED - SQ DELETION (00/08) ...]
00:27:59.251 [2024-04-24 10:23:12.443385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601cf0 is same with the state(5) to be set
[... the recv-state error repeats for tqpair=0x1601cf0 through 2024-04-24 10:23:12.445302 ...]
00:27:59.252 [2024-04-24 10:23:12.453725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b5e0 is same with the state(5) to be set
[... 2024-04-24 10:23:12.453756-453822: ASYNC EVENT REQUEST (0c) qid:0 cids 0-3 aborted, ABORTED - SQ DELETION (00/08) ...]
00:27:59.252 [2024-04-24 10:23:12.453831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb640 is same with the state(5) to be set
[... 2024-04-24 10:23:12.453862-453928: ASYNC EVENT REQUEST (0c) qid:0 cids 0-3 aborted, ABORTED - SQ DELETION (00/08) ...]
00:27:59.252 [2024-04-24 10:23:12.453937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b8e0 is same with the state(5) to be set
[... 2024-04-24 10:23:12.453965-454035: ASYNC EVENT REQUEST (0c) qid:0 cids 0-3 aborted, ABORTED - SQ DELETION (00/08) ...]
00:27:59.252 [2024-04-24 10:23:12.454044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2073430 is same with the state(5) to be set
00:27:59.252 [2024-04-24 10:23:12.454061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205a470 (9): Bad file descriptor
[... 2024-04-24 10:23:12.454096-454162: ASYNC EVENT REQUEST (0c) qid:0 cids 0-3 aborted, ABORTED - SQ DELETION (00/08) ...]
00:27:59.252 [2024-04-24 10:23:12.454171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b160 is same with the state(5) to be set
00:27:59.252 [2024-04-24 10:23:12.454189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20726b0 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.456870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:59.252 [2024-04-24 10:23:12.456902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:59.252 [2024-04-24 10:23:12.456917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2078f70 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.456930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b5e0 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.456962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:59.252 [2024-04-24 10:23:12.456974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.252 [2024-04-24 10:23:12.456983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb660 is same with the state(5) to be set
00:27:59.252 [2024-04-24 10:23:12.456995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb660 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.457032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eb640 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.457051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211b8e0 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.457075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2073430 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.457099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211b160 (9): Bad file descriptor
00:27:59.252 [2024-04-24 10:23:12.457405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.252 [2024-04-24 10:23:12.457424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-04-24 10:23:12.457438-458446: the I/O abort dump repeats for qid:1 in the same command/completion pattern (READ/WRITE, various cids, lba 24320-33280, len:128; every completion ABORTED - SQ DELETION (00/08)) ...]
00:27:59.253 [2024-04-24 10:23:12.458456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.253 [2024-04-24 10:23:12.458465] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.458705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.458715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214280 is same with the state(5) to be set 00:27:59.253 [2024-04-24 10:23:12.460061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.460082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.460096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.460105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.460117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.253 [2024-04-24 10:23:12.460127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.253 [2024-04-24 10:23:12.460140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460223] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31488 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.460979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.460988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.254 [2024-04-24 10:23:12.461261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.254 [2024-04-24 10:23:12.461273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:59.254 [2024-04-24 10:23:12.461282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.254 [2024-04-24 10:23:12.461293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.254 [2024-04-24 10:23:12.461303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.254 [2024-04-24 10:23:12.461314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.461325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.461337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.461346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.461357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.461367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.461378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.461387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.461399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.461408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.461418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20482c0 is same with the state(5) to be set
00:27:59.255 [2024-04-24 10:23:12.463576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.463982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.463993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.255 [2024-04-24 10:23:12.464269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.255 [2024-04-24 10:23:12.464280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.256 [2024-04-24 10:23:12.464928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.256 [2024-04-24 10:23:12.464938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32526d0 is same with the state(5) to be set
00:27:59.256 [2024-04-24 10:23:12.468155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:59.256 [2024-04-24 10:23:12.468183] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:59.256 [2024-04-24 10:23:12.468195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:59.256 [2024-04-24 10:23:12.468568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.256 [2024-04-24 10:23:12.468848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.256 [2024-04-24 10:23:12.468862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b5e0 with addr=10.0.0.2, port=4420
00:27:59.256 [2024-04-24 10:23:12.468872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b5e0 is same with the state(5) to be set
00:27:59.256 [2024-04-24 10:23:12.469098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.256 [2024-04-24 10:23:12.469313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.256 [2024-04-24 10:23:12.469327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2078f70 with addr=10.0.0.2, port=4420
00:27:59.256 [2024-04-24 10:23:12.469336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078f70 is same with the state(5) to be set
00:27:59.256 [2024-04-24 10:23:12.469346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:59.256 [2024-04-24 10:23:12.469355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:59.256 [2024-04-24 10:23:12.469365] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:59.256 [2024-04-24 10:23:12.469403] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.256 [2024-04-24 10:23:12.469449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2078f70 (9): Bad file descriptor
00:27:59.256 [2024-04-24 10:23:12.469476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b5e0 (9): Bad file descriptor
00:27:59.256 [2024-04-24 10:23:12.469578] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:59.256 [2024-04-24 10:23:12.469698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.256 [2024-04-24 10:23:12.469951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.256 [2024-04-24 10:23:12.470083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.256 [2024-04-24 10:23:12.470096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2057710 with addr=10.0.0.2, port=4420 00:27:59.256 [2024-04-24 10:23:12.470104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2057710 is same with the state(5) to be set 00:27:59.256 [2024-04-24 10:23:12.470404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.256 [2024-04-24 10:23:12.470546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.256 [2024-04-24 10:23:12.470558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205a470 with addr=10.0.0.2, port=4420 00:27:59.256 [2024-04-24 10:23:12.470566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205a470 is same with the state(5) to be set 00:27:59.256 [2024-04-24 10:23:12.470730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.256 [2024-04-24 10:23:12.470953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.256 [2024-04-24 10:23:12.470965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20726b0 with addr=10.0.0.2, port=4420 00:27:59.256 [2024-04-24 10:23:12.470972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20726b0 is same with the state(5) to be set 00:27:59.256 [2024-04-24 10:23:12.471554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.256 [2024-04-24 10:23:12.471855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.256 [2024-04-24 10:23:12.471862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.471990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.471999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:59.257 [2024-04-24 10:23:12.472392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 
[2024-04-24 10:23:12.472572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.472714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.472723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a4b70 is same with the state(5) to be set 00:27:59.257 [2024-04-24 10:23:12.473916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.473931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.473943] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.473952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.473961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.473969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.473979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.473987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.474001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.474008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.474018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.474026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.474036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.474044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.474054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.474062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.474080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.474088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.257 [2024-04-24 10:23:12.474098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.257 [2024-04-24 10:23:12.474106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.258 [2024-04-24 10:23:12.474764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.258 [2024-04-24 10:23:12.474772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.474989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.474999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.475007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.475017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.475025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.475034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.475042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.475052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.475060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.475074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.475082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.475092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2101cf0 is same with the state(5) to be set 00:27:59.259 [2024-04-24 10:23:12.476266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.259 [2024-04-24 10:23:12.476565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.259 [2024-04-24 10:23:12.476573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.259 [2024-04-24 10:23:12.476582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.259 [2024-04-24 10:23:12.476590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.259 [2024-04-24 10:23:12.476600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.259 [2024-04-24 10:23:12.476608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion notice pair repeats for every remaining in-flight READ/WRITE on qid:1 (cid 0-63, lba 24320-34688): a first burst through 10:23:12.477422 and a second burst from 10:23:12.478587 through 10:23:12.479669, every command completed with ABORTED - SQ DELETION (00/08) ...]
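The flood of notices above is SPDK's qpair teardown path: the shutdown test deletes submission queue 1 while I/O is still queued, so every outstanding command is completed with generic status 00/08 (command aborted due to SQ deletion) and nvme_qpair.c prints each command/completion pair. A minimal sketch for triaging a run like this, assuming the console output was saved as build.log (the file name is an assumption, not part of the harness):

    #!/usr/bin/env bash
    # Count commands aborted by SQ deletion, then split the aborted
    # submissions by opcode so READ/WRITE skew is visible at a glance.
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' build.log \
        | awk '{ count[$NF]++ } END { for (op in count) print op, count[op] }'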
00:27:59.261 [2024-04-24 10:23:12.479677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.261 [2024-04-24 10:23:12.479685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:59.261 [2024-04-24 10:23:12.482274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:59.261 [2024-04-24 10:23:12.482301] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:59.261 [2024-04-24 10:23:12.482310] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:59.261 task offset: 29184 on job bdev=Nvme2n1 fails
00:27:59.261
00:27:59.261                                          Latency(us)
00:27:59.261 Device Information : runtime(s)    IOPS    MiB/s   Fail/s    TO/s    Average        min        max
00:27:59.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme1n1 ended in about 0.54 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme1n1            :  0.54      387.33    24.21   119.18    0.00  125331.31   62914.56  130388.15
00:27:59.261 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme2n1 ended in about 0.51 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme2n1            :  0.51      407.74    25.48   125.46    0.00  117473.28   17666.23  125829.12
00:27:59.261 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme3n1 ended in about 0.54 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme3n1            :  0.54      385.39    24.09   118.58    0.00  122880.84   71576.71   94827.74
00:27:59.261 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme4n1 ended in about 0.55 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme4n1            :  0.55      377.53    23.60   116.16    0.00  123997.13   72488.51  107137.11
00:27:59.261 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme5n1 ended in about 0.53 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme5n1            :  0.53      390.58    24.41   120.18    0.00  118154.20   67929.49   94371.84
00:27:59.261 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme6n1 ended in about 0.53 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme6n1            :  0.53      389.69    24.36   119.91    0.00  116934.83   33508.84  101666.28
00:27:59.261 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme7n1 ended in about 0.55 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme7n1            :  0.55      375.92    23.50   115.67    0.00  120019.71   68841.29  107593.02
00:27:59.261 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme8n1 ended in about 0.56 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme8n1            :  0.56      374.35    23.40   115.18    0.00  119025.78   72032.61   97563.16
00:27:59.261 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme9n1 ended in about 0.56 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme9n1            :  0.56      372.86    23.30   114.73    0.00  118015.08   50149.29  103033.99
00:27:59.261 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:59.261 Job: Nvme10n1 ended in about 0.54 seconds with error
00:27:59.261 Verification LBA range: start 0x0 length 0x400
00:27:59.261 Nvme10n1           :  0.54      301.90    18.87   117.81    0.00  134863.19   81150.66  115343.36
00:27:59.261 ===================================================================================================================
00:27:59.261 Total              :           3763.30   235.21  1182.86    0.00  121452.60   17666.23  130388.15
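The bdevperf summary above reports, per job: runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds; every job ending in error is the expected outcome of this shutdown test. A small sketch to pull the per-device rows out of a saved copy of this log (build.log is an assumed name; the field positions assume the one-entry-per-line layout above, where each row is "<elapsed> NvmeXn1 : <runtime> <IOPS> <MiB/s> <Fail/s> ..."):

    #!/usr/bin/env bash
    # Print device name, IOPS and Fail/s from the bdevperf summary rows.
    awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" {
        printf "%-9s %8.2f IOPS %8.2f fail/s\n", $2, $5, $7
    }' build.log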
00:27:59.521 [2024-04-24 10:23:12.509819] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:59.521 [2024-04-24 10:23:12.509867] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:59.521 [2024-04-24 10:23:12.509925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2057710 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.509940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205a470 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.509950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20726b0 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.509960] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:59.521 [2024-04-24 10:23:12.509968] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:59.521 [2024-04-24 10:23:12.509976] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:59.521 [2024-04-24 10:23:12.509992] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:59.521 [2024-04-24 10:23:12.510000] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:59.521 [2024-04-24 10:23:12.510007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:59.521 [2024-04-24 10:23:12.510053] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.510066] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.510083] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.510095] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.510106] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.510214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.521 [2024-04-24 10:23:12.510224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.521 [2024-04-24 10:23:12.510559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.510796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.510809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2073430 with addr=10.0.0.2, port=4420
00:27:59.521 [2024-04-24 10:23:12.510819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2073430 is same with the state(5) to be set
00:27:59.521 [2024-04-24 10:23:12.510989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.511203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.511216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211b8e0 with addr=10.0.0.2, port=4420
00:27:59.521 [2024-04-24 10:23:12.511224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b8e0 is same with the state(5) to be set
00:27:59.521 [2024-04-24 10:23:12.511447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.511601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.511612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211b160 with addr=10.0.0.2, port=4420
00:27:59.521 [2024-04-24 10:23:12.511620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b160 is same with the state(5) to be set
00:27:59.521 [2024-04-24 10:23:12.511780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.511927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.511941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20eb640 with addr=10.0.0.2, port=4420
00:27:59.521 [2024-04-24 10:23:12.511949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb640 is same with the state(5) to be set
00:27:59.521 [2024-04-24 10:23:12.511957] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:59.521 [2024-04-24 10:23:12.511964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:59.521 [2024-04-24 10:23:12.511973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:59.521 [2024-04-24 10:23:12.511986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:59.521 [2024-04-24 10:23:12.511993] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:59.521 [2024-04-24 10:23:12.512000] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
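errno = 111 is ECONNREFUSED: by this point the target side of the test has been killed, so nothing is listening on 10.0.0.2:4420 and every reconnect attempt from the host's posix sock layer is refused. A quick way to check the same condition by hand, as a sketch (address and port copied from the log; the /dev/tcp redirection is a bashism):

    #!/usr/bin/env bash
    # Probe the NVMe/TCP listener; a refused connect here matches the
    # 'connect() failed, errno = 111' entries in the log.
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener is up"
    else
        echo "no listener (connection refused or timed out, cf. errno = 111)"
    fi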
00:27:59.521 [2024-04-24 10:23:12.512013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:59.521 [2024-04-24 10:23:12.512020] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:59.521 [2024-04-24 10:23:12.512027] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:59.521 [2024-04-24 10:23:12.512054] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.512081] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.512093] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.512103] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:59.521 [2024-04-24 10:23:12.513204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:59.521 [2024-04-24 10:23:12.513233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.521 [2024-04-24 10:23:12.513241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.521 [2024-04-24 10:23:12.513247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.521 [2024-04-24 10:23:12.513272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2073430 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.513285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211b8e0 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.513295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211b160 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.513305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eb640 (9): Bad file descriptor
00:27:59.521 [2024-04-24 10:23:12.513362] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:59.521 [2024-04-24 10:23:12.513374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:59.521 [2024-04-24 10:23:12.513695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.513988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.521 [2024-04-24 10:23:12.514000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbb660 with addr=10.0.0.2, port=4420
00:27:59.521 [2024-04-24 10:23:12.514008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbb660 is same with the state(5) to be set
00:27:59.521 [2024-04-24 10:23:12.514017] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.514027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.514035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:59.522 [2024-04-24 10:23:12.514045] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.514053] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.514060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:59.522 [2024-04-24 10:23:12.514090] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.514098] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.514105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:59.522 [2024-04-24 10:23:12.514115] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.514122] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.514129] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:27:59.522 [2024-04-24 10:23:12.514187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.522 [2024-04-24 10:23:12.514196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.522 [2024-04-24 10:23:12.514202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.522 [2024-04-24 10:23:12.514209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.522 [2024-04-24 10:23:12.514427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.522 [2024-04-24 10:23:12.514713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.522 [2024-04-24 10:23:12.514724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2078f70 with addr=10.0.0.2, port=4420
00:27:59.522 [2024-04-24 10:23:12.514733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078f70 is same with the state(5) to be set
00:27:59.522 [2024-04-24 10:23:12.514931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.522 [2024-04-24 10:23:12.515199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.522 [2024-04-24 10:23:12.515210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b5e0 with addr=10.0.0.2, port=4420
00:27:59.522 [2024-04-24 10:23:12.515219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b5e0 is same with the state(5) to be set
00:27:59.522 [2024-04-24 10:23:12.515228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbb660 (9): Bad file descriptor
00:27:59.522 [2024-04-24 10:23:12.515259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2078f70 (9): Bad file descriptor
00:27:59.522 [2024-04-24 10:23:12.515269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b5e0 (9): Bad file descriptor
00:27:59.522 [2024-04-24 10:23:12.515278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.515285] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.515292] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:59.522 [2024-04-24 10:23:12.515332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.522 [2024-04-24 10:23:12.515341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.515351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.515359] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:59.522 [2024-04-24 10:23:12.515368] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:59.522 [2024-04-24 10:23:12.515375] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:59.522 [2024-04-24 10:23:12.515382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:59.522 [2024-04-24 10:23:12.515408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.522 [2024-04-24 10:23:12.515416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
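Each "resetting controller" notice above starts bdev_nvme's reconnect state machine for one subsystem; because the target is gone, every attempt ends in "controller reinitialization failed" and finally "Resetting controller failed.". A sketch to pair attempts with outcomes per subsystem NQN from a saved copy of this log (build.log is an assumed name):

    #!/usr/bin/env bash
    # Tally reset attempts vs. reinitialization failures per subsystem NQN,
    # so subsystems that never recovered stand out.
    grep -oE '\[nqn\.[^]]*\] (resetting controller|controller reinitialization failed)' build.log \
        | sort | uniq -c | sort -rn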
00:27:59.780 10:23:12 -- target/shutdown.sh@135 -- # nvmfpid=
00:27:59.780 10:23:12 -- target/shutdown.sh@138 -- # sleep 1
00:28:00.714 10:23:13 -- target/shutdown.sh@141 -- # kill -9 419304
00:28:00.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (419304) - No such process
00:28:00.714 10:23:13 -- target/shutdown.sh@141 -- # true
00:28:00.714 10:23:13 -- target/shutdown.sh@143 -- # stoptarget
00:28:00.714 10:23:13 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:00.714 10:23:13 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:00.714 10:23:13 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:00.714 10:23:13 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:00.714 10:23:13 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:00.714 10:23:13 -- nvmf/common.sh@116 -- # sync
00:28:00.714 10:23:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:28:00.714 10:23:13 -- nvmf/common.sh@119 -- # set +e
00:28:00.714 10:23:13 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:00.714 10:23:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:28:00.714 rmmod nvme_tcp
00:28:00.714 rmmod nvme_fabrics
00:28:00.714 rmmod nvme_keyring
00:28:00.714 10:23:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:28:00.714 10:23:13 -- nvmf/common.sh@123 -- # set -e
00:28:00.714 10:23:13 -- nvmf/common.sh@124 -- # return 0
00:28:00.714 10:23:13 -- nvmf/common.sh@477 -- # '[' -n '' ']'
00:28:00.714 10:23:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:28:00.714 10:23:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:28:00.714 10:23:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:28:00.714 10:23:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:00.714 10:23:13 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:28:00.714 10:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:00.714 10:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:00.714 10:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:03.244 10:23:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:28:03.244
00:28:03.244 real	0m7.642s
00:28:03.244 user	0m18.728s
00:28:03.244 sys	0m1.233s
00:28:03.244 10:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:03.244 10:23:16 -- common/autotest_common.sh@10 -- # set +x
00:28:03.244 ************************************
00:28:03.244 END TEST nvmf_shutdown_tc3
00:28:03.244 ************************************
00:28:03.244 10:23:16 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT
00:28:03.244
00:28:03.244 real	0m30.681s
00:28:03.244 user	1m18.779s
00:28:03.244 sys	0m7.916s
00:28:03.244 10:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:03.244 10:23:16 -- common/autotest_common.sh@10 -- # set +x
00:28:03.244 ************************************
00:28:03.244 END TEST nvmf_shutdown
00:28:03.244 ************************************
00:28:03.244 10:23:16 -- nvmf/nvmf.sh@85 -- # timing_exit target
00:28:03.244 10:23:16 -- common/autotest_common.sh@718 -- # xtrace_disable
00:28:03.244 10:23:16 -- common/autotest_common.sh@10 -- # set +x
00:28:03.244 10:23:16 -- nvmf/nvmf.sh@87 -- # timing_enter host
00:28:03.244 10:23:16 -- common/autotest_common.sh@712 -- # xtrace_disable
00:28:03.244 10:23:16 -- common/autotest_common.sh@10 -- # set +x
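The teardown above runs `modprobe -v -r nvme-tcp`, which, as the rmmod lines show, removes nvme_tcp together with its now-unused nvme_fabrics and nvme_keyring dependencies, and then flushes the IPv4 address off the test interface. A standalone sketch of the same cleanup, assuming the interface name from the log (cvl_0_1) and root privileges:

    #!/usr/bin/env bash
    # Remove the NVMe/TCP kernel stack (dependents first) and drop the test IP.
    # '|| true' keeps the cleanup going if a module is already unloaded.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    ip -4 addr flush cvl_0_1 || true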
00:28:03.244 10:23:16 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]]
00:28:03.244 10:23:16 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:03.244 10:23:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:28:03.244 10:23:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:03.244 10:23:16 -- common/autotest_common.sh@10 -- # set +x
00:28:03.244 ************************************
00:28:03.244 START TEST nvmf_multicontroller
00:28:03.244 ************************************
00:28:03.244 10:23:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:03.244 * Looking for test storage...
00:28:03.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:03.244 10:23:16 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:03.244 10:23:16 -- nvmf/common.sh@7 -- # uname -s
00:28:03.244 10:23:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:03.244 10:23:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:03.244 10:23:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:03.244 10:23:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:03.244 10:23:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:03.244 10:23:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:03.244 10:23:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:03.244 10:23:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:03.244 10:23:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:03.244 10:23:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:03.244 10:23:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:28:03.244 10:23:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:28:03.244 10:23:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:03.244 10:23:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:03.244 10:23:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:03.244 10:23:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:03.244 10:23:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:03.244 10:23:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:03.244 10:23:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:03.244 10:23:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:03.244 10:23:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:03.244 10:23:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:03.244 10:23:16 -- paths/export.sh@5 -- # export PATH
00:28:03.244 10:23:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:03.244 10:23:16 -- nvmf/common.sh@46 -- # : 0
00:28:03.244 10:23:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:28:03.244 10:23:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:28:03.244 10:23:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:28:03.244 10:23:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:03.244 10:23:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:03.244 10:23:16 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:28:03.244 10:23:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:28:03.244 10:23:16 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:28:03.244 10:23:16 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:03.244 10:23:16 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:03.244 10:23:16 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:28:03.244 10:23:16 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:28:03.244 10:23:16 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:28:03.244 10:23:16 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:28:03.244 10:23:16 -- host/multicontroller.sh@23 -- # nvmftestinit
00:28:03.244 10:23:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:28:03.244 10:23:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:03.244 10:23:16 -- nvmf/common.sh@436 -- # prepare_net_devs
00:28:03.244 10:23:16 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:28:03.244 10:23:16 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:28:03.244 10:23:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:03.244 10:23:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:03.244 10:23:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:03.244 10:23:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:03.244 10:23:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:03.244 10:23:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:03.244 10:23:16 -- common/autotest_common.sh@10 -- # set +x 00:28:08.531 10:23:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:08.531 10:23:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:08.531 10:23:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:08.531 10:23:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:08.531 10:23:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:08.531 10:23:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:08.531 10:23:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:08.531 10:23:21 -- nvmf/common.sh@294 -- # net_devs=() 00:28:08.531 10:23:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:08.531 10:23:21 -- nvmf/common.sh@295 -- # e810=() 00:28:08.531 10:23:21 -- nvmf/common.sh@295 -- # local -ga e810 00:28:08.531 10:23:21 -- nvmf/common.sh@296 -- # x722=() 00:28:08.531 10:23:21 -- nvmf/common.sh@296 -- # local -ga x722 00:28:08.531 10:23:21 -- nvmf/common.sh@297 -- # mlx=() 00:28:08.531 10:23:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:08.531 10:23:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.531 10:23:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.531 10:23:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.531 10:23:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.531 10:23:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.531 10:23:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.532 10:23:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.532 10:23:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.532 10:23:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.532 10:23:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.532 10:23:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.532 10:23:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:08.532 10:23:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:08.532 10:23:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:08.532 10:23:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:08.532 10:23:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:08.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:08.532 10:23:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:08.532 10:23:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:08.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:08.532 10:23:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:28:08.532 10:23:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:08.532 10:23:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:08.532 10:23:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.532 10:23:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:08.532 10:23:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.532 10:23:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:08.532 Found net devices under 0000:86:00.0: cvl_0_0 00:28:08.532 10:23:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.532 10:23:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:08.532 10:23:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.532 10:23:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:08.532 10:23:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.532 10:23:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:08.532 Found net devices under 0000:86:00.1: cvl_0_1 00:28:08.532 10:23:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.532 10:23:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:08.532 10:23:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:08.532 10:23:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:08.532 10:23:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.532 10:23:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.532 10:23:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.532 10:23:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:08.532 10:23:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.532 10:23:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.532 10:23:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:08.532 10:23:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.532 10:23:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.532 10:23:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:08.532 10:23:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:08.532 10:23:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.532 10:23:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.532 10:23:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.532 10:23:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.532 10:23:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:08.532 10:23:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.532 10:23:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.532 10:23:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:28:08.532 10:23:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:08.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:28:08.532 00:28:08.532 --- 10.0.0.2 ping statistics --- 00:28:08.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.532 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:28:08.532 10:23:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:28:08.532 00:28:08.532 --- 10.0.0.1 ping statistics --- 00:28:08.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.532 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:28:08.532 10:23:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.532 10:23:21 -- nvmf/common.sh@410 -- # return 0 00:28:08.532 10:23:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:08.532 10:23:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.532 10:23:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:08.532 10:23:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.532 10:23:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:08.532 10:23:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:08.532 10:23:21 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:08.532 10:23:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:08.532 10:23:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:08.532 10:23:21 -- common/autotest_common.sh@10 -- # set +x 00:28:08.532 10:23:21 -- nvmf/common.sh@469 -- # nvmfpid=423446 00:28:08.532 10:23:21 -- nvmf/common.sh@470 -- # waitforlisten 423446 00:28:08.532 10:23:21 -- common/autotest_common.sh@819 -- # '[' -z 423446 ']' 00:28:08.532 10:23:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.532 10:23:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:08.532 10:23:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.532 10:23:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:08.532 10:23:21 -- common/autotest_common.sh@10 -- # set +x 00:28:08.532 10:23:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:08.532 [2024-04-24 10:23:21.594669] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:08.532 [2024-04-24 10:23:21.594710] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.532 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.532 [2024-04-24 10:23:21.654036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:08.532 [2024-04-24 10:23:21.731614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:08.532 [2024-04-24 10:23:21.731722] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:08.532 [2024-04-24 10:23:21.731730] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.532 [2024-04-24 10:23:21.731736] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.532 [2024-04-24 10:23:21.731769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.532 [2024-04-24 10:23:21.731855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.532 [2024-04-24 10:23:21.731856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.462 10:23:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:09.462 10:23:22 -- common/autotest_common.sh@852 -- # return 0 00:28:09.462 10:23:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:09.462 10:23:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 10:23:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.462 10:23:22 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 [2024-04-24 10:23:22.445321] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.462 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.462 10:23:22 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 Malloc0 00:28:09.462 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.462 10:23:22 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.462 10:23:22 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.462 10:23:22 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 [2024-04-24 10:23:22.505969] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.462 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.462 10:23:22 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 [2024-04-24 10:23:22.513913] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:09.462 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
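For reference, the cnode1 provisioning traced above maps one-to-one onto SPDK's rpc.py. A minimal sketch, assuming the rpc.py location from this workspace and the target's default /var/tmp/spdk.sock (all commands and values are taken from the rpc_cmd calls in the log):

# Recreate the subsystem built above: TCP transport, one malloc-backed
# namespace, and listeners on ports 4420 and 4421.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421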
00:28:09.462 10:23:22 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:09.462 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.462 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.462 Malloc1 00:28:09.463 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.463 10:23:22 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:09.463 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.463 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.463 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.463 10:23:22 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:09.463 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.463 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.463 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.463 10:23:22 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:09.463 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.463 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.463 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.463 10:23:22 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:09.463 10:23:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.463 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:09.463 10:23:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.463 10:23:22 -- host/multicontroller.sh@44 -- # bdevperf_pid=423613 00:28:09.463 10:23:22 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.463 10:23:22 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:09.463 10:23:22 -- host/multicontroller.sh@47 -- # waitforlisten 423613 /var/tmp/bdevperf.sock 00:28:09.463 10:23:22 -- common/autotest_common.sh@819 -- # '[' -z 423613 ']' 00:28:09.463 10:23:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.463 10:23:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:09.463 10:23:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
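bdevperf was started with -z, so it idles until controllers are attached over its own RPC socket. The attach that follows pins the host-side address (-i, hostaddr) and source port (-c, hostsvcid) so that later conflicting attaches can be detected; a sketch of the same call, assuming the paths from this log:

# First path to cnode1 through bdevperf's RPC socket; -i/-c fix the
# initiator-side address and port used for the connection.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe  # expect 1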
00:28:09.463 10:23:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:09.463 10:23:22 -- common/autotest_common.sh@10 -- # set +x 00:28:10.395 10:23:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:10.395 10:23:23 -- common/autotest_common.sh@852 -- # return 0 00:28:10.395 10:23:23 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:10.395 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.395 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.395 NVMe0n1 00:28:10.395 10:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.395 10:23:23 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.395 10:23:23 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:10.395 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.395 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.395 10:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.395 1 00:28:10.395 10:23:23 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:10.395 10:23:23 -- common/autotest_common.sh@640 -- # local es=0 00:28:10.395 10:23:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:10.395 10:23:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.395 10:23:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:10.395 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.395 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.395 request: 00:28:10.395 { 00:28:10.395 "name": "NVMe0", 00:28:10.395 "trtype": "tcp", 00:28:10.395 "traddr": "10.0.0.2", 00:28:10.395 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:10.395 "hostaddr": "10.0.0.2", 00:28:10.395 "hostsvcid": "60000", 00:28:10.395 "adrfam": "ipv4", 00:28:10.395 "trsvcid": "4420", 00:28:10.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.395 "method": "bdev_nvme_attach_controller", 00:28:10.395 "req_id": 1 00:28:10.395 } 00:28:10.395 Got JSON-RPC error response 00:28:10.395 response: 00:28:10.395 { 00:28:10.395 "code": -114, 00:28:10.395 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:10.395 } 00:28:10.395 10:23:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:10.395 10:23:23 -- common/autotest_common.sh@643 -- # es=1 00:28:10.395 10:23:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:10.395 10:23:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:10.395 10:23:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:10.395 10:23:23 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:10.395 10:23:23 -- common/autotest_common.sh@640 -- # local es=0 00:28:10.395 10:23:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:10.395 10:23:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.395 10:23:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:10.395 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.395 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.395 request: 00:28:10.395 { 00:28:10.395 "name": "NVMe0", 00:28:10.395 "trtype": "tcp", 00:28:10.395 "traddr": "10.0.0.2", 00:28:10.395 "hostaddr": "10.0.0.2", 00:28:10.395 "hostsvcid": "60000", 00:28:10.395 "adrfam": "ipv4", 00:28:10.395 "trsvcid": "4420", 00:28:10.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:10.395 "method": "bdev_nvme_attach_controller", 00:28:10.395 "req_id": 1 00:28:10.395 } 00:28:10.395 Got JSON-RPC error response 00:28:10.395 response: 00:28:10.395 { 00:28:10.395 "code": -114, 00:28:10.395 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:10.395 } 00:28:10.395 10:23:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:10.395 10:23:23 -- common/autotest_common.sh@643 -- # es=1 00:28:10.395 10:23:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:10.395 10:23:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:10.395 10:23:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:10.395 10:23:23 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:10.395 10:23:23 -- common/autotest_common.sh@640 -- # local es=0 00:28:10.395 10:23:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:10.395 10:23:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:10.395 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.396 10:23:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:10.396 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.396 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.396 request: 00:28:10.396 { 00:28:10.396 "name": "NVMe0", 00:28:10.396 "trtype": "tcp", 00:28:10.396 "traddr": "10.0.0.2", 00:28:10.396 "hostaddr": 
"10.0.0.2", 00:28:10.396 "hostsvcid": "60000", 00:28:10.396 "adrfam": "ipv4", 00:28:10.396 "trsvcid": "4420", 00:28:10.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.396 "multipath": "disable", 00:28:10.396 "method": "bdev_nvme_attach_controller", 00:28:10.396 "req_id": 1 00:28:10.396 } 00:28:10.396 Got JSON-RPC error response 00:28:10.396 response: 00:28:10.396 { 00:28:10.396 "code": -114, 00:28:10.396 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:10.396 } 00:28:10.396 10:23:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:10.396 10:23:23 -- common/autotest_common.sh@643 -- # es=1 00:28:10.396 10:23:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:10.396 10:23:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:10.396 10:23:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:10.396 10:23:23 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:10.396 10:23:23 -- common/autotest_common.sh@640 -- # local es=0 00:28:10.396 10:23:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:10.396 10:23:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:10.396 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.396 10:23:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:10.396 10:23:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:10.396 10:23:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:10.396 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.396 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.396 request: 00:28:10.396 { 00:28:10.396 "name": "NVMe0", 00:28:10.396 "trtype": "tcp", 00:28:10.396 "traddr": "10.0.0.2", 00:28:10.396 "hostaddr": "10.0.0.2", 00:28:10.396 "hostsvcid": "60000", 00:28:10.396 "adrfam": "ipv4", 00:28:10.396 "trsvcid": "4420", 00:28:10.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.396 "multipath": "failover", 00:28:10.396 "method": "bdev_nvme_attach_controller", 00:28:10.396 "req_id": 1 00:28:10.396 } 00:28:10.396 Got JSON-RPC error response 00:28:10.396 response: 00:28:10.396 { 00:28:10.396 "code": -114, 00:28:10.396 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:10.396 } 00:28:10.396 10:23:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:10.396 10:23:23 -- common/autotest_common.sh@643 -- # es=1 00:28:10.396 10:23:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:10.396 10:23:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:10.396 10:23:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:10.396 10:23:23 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.396 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.396 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.653 00:28:10.653 10:23:23 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:28:10.653 10:23:23 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.653 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.653 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.653 10:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.653 10:23:23 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:10.653 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.653 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.653 00:28:10.653 10:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.653 10:23:23 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:10.653 10:23:23 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.653 10:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.653 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:28:10.653 10:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.653 10:23:23 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:10.653 10:23:23 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:12.024 0 00:28:12.024 10:23:24 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:12.024 10:23:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.024 10:23:24 -- common/autotest_common.sh@10 -- # set +x 00:28:12.024 10:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.024 10:23:25 -- host/multicontroller.sh@100 -- # killprocess 423613 00:28:12.024 10:23:25 -- common/autotest_common.sh@926 -- # '[' -z 423613 ']' 00:28:12.024 10:23:25 -- common/autotest_common.sh@930 -- # kill -0 423613 00:28:12.024 10:23:25 -- common/autotest_common.sh@931 -- # uname 00:28:12.024 10:23:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:12.024 10:23:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 423613 00:28:12.024 10:23:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:12.024 10:23:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:12.024 10:23:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 423613' 00:28:12.024 killing process with pid 423613 00:28:12.024 10:23:25 -- common/autotest_common.sh@945 -- # kill 423613 00:28:12.024 10:23:25 -- common/autotest_common.sh@950 -- # wait 423613 00:28:12.024 10:23:25 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.024 10:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.024 10:23:25 -- common/autotest_common.sh@10 -- # set +x 00:28:12.024 10:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.024 10:23:25 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:12.024 10:23:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.024 10:23:25 -- common/autotest_common.sh@10 -- # set +x 00:28:12.024 10:23:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.024 10:23:25 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:12.024 
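Every re-attach that changed the hostnqn, the subsystem, or the multipath mode was rejected with -114; the accepted variations were a second NVMe0 path on port 4421 (added, then detached) and a separate controller NVMe1. A sketch of the state the I/O run above was started against, assuming the same sockets and paths:

# Two controllers against the same subsystem before perform_tests:
#   NVMe0 -> 10.0.0.2:4420, NVMe1 -> 10.0.0.2:4421
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe  # expect 2
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests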
10:23:25 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:12.024 10:23:25 -- common/autotest_common.sh@1597 -- # read -r file 00:28:12.024 10:23:25 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:12.024 10:23:25 -- common/autotest_common.sh@1596 -- # sort -u 00:28:12.024 10:23:25 -- common/autotest_common.sh@1598 -- # cat 00:28:12.024 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:12.024 [2024-04-24 10:23:22.611528] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:12.024 [2024-04-24 10:23:22.611580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423613 ] 00:28:12.024 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.024 [2024-04-24 10:23:22.665946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.024 [2024-04-24 10:23:22.744746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.024 [2024-04-24 10:23:23.860898] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name dabde722-886d-4de7-9fcc-df114f1843a3 already exists 00:28:12.024 [2024-04-24 10:23:23.860926] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:dabde722-886d-4de7-9fcc-df114f1843a3 alias for bdev NVMe1n1 00:28:12.024 [2024-04-24 10:23:23.860936] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:12.024 Running I/O for 1 seconds... 00:28:12.024 00:28:12.024 Latency(us) 00:28:12.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.024 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:12.024 NVMe0n1 : 1.00 24848.46 97.06 0.00 0.00 5140.66 3618.73 10770.70 00:28:12.024 =================================================================================================================== 00:28:12.024 Total : 24848.46 97.06 0.00 0.00 5140.66 3618.73 10770.70 00:28:12.024 Received shutdown signal, test time was about 1.000000 seconds 00:28:12.024 00:28:12.024 Latency(us) 00:28:12.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.024 =================================================================================================================== 00:28:12.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.024 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:12.024 10:23:25 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:12.024 10:23:25 -- common/autotest_common.sh@1597 -- # read -r file 00:28:12.024 10:23:25 -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:12.024 10:23:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:12.024 10:23:25 -- nvmf/common.sh@116 -- # sync 00:28:12.282 10:23:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:12.282 10:23:25 -- nvmf/common.sh@119 -- # set +e 00:28:12.282 10:23:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:12.282 10:23:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:12.282 rmmod nvme_tcp 00:28:12.282 rmmod nvme_fabrics 00:28:12.282 rmmod nvme_keyring 00:28:12.282 10:23:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:12.282 10:23:25 -- nvmf/common.sh@123 -- # set -e 
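The result table in the try.txt dump above is internally consistent: at the 4096-byte write size, the IOPS column implies the MiB/s column.

# Throughput sanity check: IOPS x IO size, expressed in MiB/s.
echo '24848.46 * 4096 / (1024 * 1024)' | bc -l   # ~97.06, matching the table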
00:28:12.282 10:23:25 -- nvmf/common.sh@124 -- # return 0 00:28:12.282 10:23:25 -- nvmf/common.sh@477 -- # '[' -n 423446 ']' 00:28:12.282 10:23:25 -- nvmf/common.sh@478 -- # killprocess 423446 00:28:12.282 10:23:25 -- common/autotest_common.sh@926 -- # '[' -z 423446 ']' 00:28:12.282 10:23:25 -- common/autotest_common.sh@930 -- # kill -0 423446 00:28:12.282 10:23:25 -- common/autotest_common.sh@931 -- # uname 00:28:12.282 10:23:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:12.282 10:23:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 423446 00:28:12.282 10:23:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:12.282 10:23:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:12.282 10:23:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 423446' 00:28:12.282 killing process with pid 423446 00:28:12.282 10:23:25 -- common/autotest_common.sh@945 -- # kill 423446 00:28:12.282 10:23:25 -- common/autotest_common.sh@950 -- # wait 423446 00:28:12.540 10:23:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:12.540 10:23:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:12.540 10:23:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:12.540 10:23:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.540 10:23:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:12.540 10:23:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.540 10:23:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.540 10:23:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.441 10:23:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:14.441 00:28:14.441 real 0m11.560s 00:28:14.441 user 0m16.023s 00:28:14.441 sys 0m4.774s 00:28:14.441 10:23:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.441 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:28:14.441 ************************************ 00:28:14.441 END TEST nvmf_multicontroller 00:28:14.441 ************************************ 00:28:14.699 10:23:27 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:14.699 10:23:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:14.699 10:23:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:14.699 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:28:14.699 ************************************ 00:28:14.699 START TEST nvmf_aer 00:28:14.699 ************************************ 00:28:14.699 10:23:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:14.699 * Looking for test storage... 
00:28:14.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.699 10:23:27 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.699 10:23:27 -- nvmf/common.sh@7 -- # uname -s 00:28:14.699 10:23:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.699 10:23:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.699 10:23:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.699 10:23:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.699 10:23:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.699 10:23:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.699 10:23:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.699 10:23:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.699 10:23:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.699 10:23:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.699 10:23:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.699 10:23:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.699 10:23:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.699 10:23:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.699 10:23:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.699 10:23:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.699 10:23:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.699 10:23:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.699 10:23:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.699 10:23:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.699 10:23:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.699 10:23:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.699 10:23:27 -- paths/export.sh@5 -- # export PATH 00:28:14.700 10:23:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.700 10:23:27 -- nvmf/common.sh@46 -- # : 0 00:28:14.700 10:23:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:14.700 10:23:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:14.700 10:23:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:14.700 10:23:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.700 10:23:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.700 10:23:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:14.700 10:23:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:14.700 10:23:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:14.700 10:23:27 -- host/aer.sh@11 -- # nvmftestinit 00:28:14.700 10:23:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:14.700 10:23:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.700 10:23:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:14.700 10:23:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:14.700 10:23:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:14.700 10:23:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.700 10:23:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.700 10:23:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.700 10:23:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:14.700 10:23:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:14.700 10:23:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:14.700 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:28:19.962 10:23:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:19.962 10:23:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:19.962 10:23:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:19.962 10:23:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:19.962 10:23:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:19.962 10:23:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:19.962 10:23:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:19.962 10:23:32 -- nvmf/common.sh@294 -- # net_devs=() 00:28:19.962 10:23:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:19.962 10:23:32 -- nvmf/common.sh@295 -- # e810=() 00:28:19.962 10:23:32 -- nvmf/common.sh@295 -- # local -ga e810 00:28:19.962 10:23:32 -- nvmf/common.sh@296 -- # x722=() 00:28:19.962 
10:23:32 -- nvmf/common.sh@296 -- # local -ga x722 00:28:19.962 10:23:32 -- nvmf/common.sh@297 -- # mlx=() 00:28:19.962 10:23:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:19.962 10:23:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.962 10:23:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:19.962 10:23:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:19.962 10:23:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:19.962 10:23:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:19.962 10:23:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:19.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:19.962 10:23:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:19.962 10:23:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:19.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:19.962 10:23:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:19.962 10:23:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:19.962 10:23:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.962 10:23:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:19.962 10:23:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.962 10:23:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:19.962 Found net devices under 0000:86:00.0: cvl_0_0 00:28:19.962 10:23:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.962 10:23:32 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:19.962 10:23:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.962 10:23:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:19.962 10:23:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.962 10:23:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:19.962 Found net devices under 0000:86:00.1: cvl_0_1 00:28:19.962 10:23:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.962 10:23:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:19.962 10:23:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:19.962 10:23:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:19.962 10:23:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:19.962 10:23:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.962 10:23:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.962 10:23:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.963 10:23:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:19.963 10:23:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.963 10:23:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.963 10:23:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:19.963 10:23:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.963 10:23:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.963 10:23:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:19.963 10:23:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:19.963 10:23:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.963 10:23:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.963 10:23:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.963 10:23:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.963 10:23:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:19.963 10:23:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.963 10:23:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.963 10:23:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.963 10:23:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:19.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:28:19.963 00:28:19.963 --- 10.0.0.2 ping statistics --- 00:28:19.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.963 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:19.963 10:23:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:28:19.963 00:28:19.963 --- 10.0.0.1 ping statistics --- 00:28:19.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.963 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:28:19.963 10:23:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.963 10:23:32 -- nvmf/common.sh@410 -- # return 0 00:28:19.963 10:23:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:19.963 10:23:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.963 10:23:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:19.963 10:23:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:19.963 10:23:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.963 10:23:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:19.963 10:23:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:19.963 10:23:32 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:19.963 10:23:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:19.963 10:23:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:19.963 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:28:19.963 10:23:32 -- nvmf/common.sh@469 -- # nvmfpid=427422 00:28:19.963 10:23:32 -- nvmf/common.sh@470 -- # waitforlisten 427422 00:28:19.963 10:23:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:19.963 10:23:32 -- common/autotest_common.sh@819 -- # '[' -z 427422 ']' 00:28:19.963 10:23:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.963 10:23:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:19.963 10:23:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.963 10:23:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:19.963 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:28:19.963 [2024-04-24 10:23:32.798673] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:19.963 [2024-04-24 10:23:32.798714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.963 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.963 [2024-04-24 10:23:32.854974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.963 [2024-04-24 10:23:32.933527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:19.963 [2024-04-24 10:23:32.933651] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.963 [2024-04-24 10:23:32.933660] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.963 [2024-04-24 10:23:32.933666] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
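The nvmf_tgt just launched runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init set up above: the NIC's first port (cvl_0_0, 10.0.0.2) is moved into the namespace for the target side, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator side. Condensed from the logged command sequence:

# Target port isolated in a namespace, initiator port left in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# The target itself then starts under the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF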
00:28:19.963 [2024-04-24 10:23:32.933706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.963 [2024-04-24 10:23:32.933805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.963 [2024-04-24 10:23:32.933865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.963 [2024-04-24 10:23:32.933867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.526 10:23:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:20.527 10:23:33 -- common/autotest_common.sh@852 -- # return 0 00:28:20.527 10:23:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:20.527 10:23:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 10:23:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.527 10:23:33 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.527 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 [2024-04-24 10:23:33.649410] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.527 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.527 10:23:33 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:20.527 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 Malloc0 00:28:20.527 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.527 10:23:33 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:20.527 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.527 10:23:33 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:20.527 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.527 10:23:33 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.527 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 [2024-04-24 10:23:33.701239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.527 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.527 10:23:33 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:20.527 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.527 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 [2024-04-24 10:23:33.709049] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:20.527 [ 00:28:20.527 { 00:28:20.527 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:20.527 "subtype": "Discovery", 00:28:20.527 "listen_addresses": [], 00:28:20.527 "allow_any_host": true, 00:28:20.527 "hosts": [] 00:28:20.527 }, 00:28:20.527 { 00:28:20.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:28:20.527 "subtype": "NVMe", 00:28:20.527 "listen_addresses": [ 00:28:20.527 { 00:28:20.527 "transport": "TCP", 00:28:20.527 "trtype": "TCP", 00:28:20.527 "adrfam": "IPv4", 00:28:20.527 "traddr": "10.0.0.2", 00:28:20.527 "trsvcid": "4420" 00:28:20.527 } 00:28:20.527 ], 00:28:20.527 "allow_any_host": true, 00:28:20.527 "hosts": [], 00:28:20.527 "serial_number": "SPDK00000000000001", 00:28:20.527 "model_number": "SPDK bdev Controller", 00:28:20.527 "max_namespaces": 2, 00:28:20.527 "min_cntlid": 1, 00:28:20.527 "max_cntlid": 65519, 00:28:20.527 "namespaces": [ 00:28:20.527 { 00:28:20.527 "nsid": 1, 00:28:20.527 "bdev_name": "Malloc0", 00:28:20.527 "name": "Malloc0", 00:28:20.527 "nguid": "B0CE141ABB7948B383698EA6B458A3E0", 00:28:20.527 "uuid": "b0ce141a-bb79-48b3-8369-8ea6b458a3e0" 00:28:20.527 } 00:28:20.527 ] 00:28:20.527 } 00:28:20.527 ] 00:28:20.527 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.527 10:23:33 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:20.527 10:23:33 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:20.527 10:23:33 -- host/aer.sh@33 -- # aerpid=427652 00:28:20.527 10:23:33 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:20.527 10:23:33 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:20.527 10:23:33 -- common/autotest_common.sh@1244 -- # local i=0 00:28:20.527 10:23:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:20.527 10:23:33 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:28:20.527 10:23:33 -- common/autotest_common.sh@1247 -- # i=1 00:28:20.527 10:23:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:20.527 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.784 10:23:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:20.784 10:23:33 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:28:20.784 10:23:33 -- common/autotest_common.sh@1247 -- # i=2 00:28:20.784 10:23:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:20.784 10:23:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:20.784 10:23:33 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:20.784 10:23:33 -- common/autotest_common.sh@1255 -- # return 0 00:28:20.784 10:23:33 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:20.784 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.784 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.784 Malloc1 00:28:20.784 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.784 10:23:33 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:20.784 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.784 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.784 10:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.784 10:23:33 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:20.784 10:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.784 10:23:33 -- common/autotest_common.sh@10 -- # set +x 00:28:20.784 Asynchronous Event Request test 00:28:20.784 Attaching to 10.0.0.2 00:28:20.784 Attached to 10.0.0.2 00:28:20.784 Registering asynchronous event callbacks... 
00:28:20.784 Starting namespace attribute notice tests for all controllers... 00:28:20.784 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:20.784 aer_cb - Changed Namespace 00:28:20.784 Cleaning up... 00:28:20.784 [ 00:28:20.784 { 00:28:20.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:20.784 "subtype": "Discovery", 00:28:20.784 "listen_addresses": [], 00:28:20.784 "allow_any_host": true, 00:28:20.784 "hosts": [] 00:28:20.784 }, 00:28:20.784 { 00:28:20.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.784 "subtype": "NVMe", 00:28:20.784 "listen_addresses": [ 00:28:20.784 { 00:28:20.784 "transport": "TCP", 00:28:20.784 "trtype": "TCP", 00:28:20.784 "adrfam": "IPv4", 00:28:20.784 "traddr": "10.0.0.2", 00:28:20.784 "trsvcid": "4420" 00:28:20.784 } 00:28:20.784 ], 00:28:20.784 "allow_any_host": true, 00:28:20.784 "hosts": [], 00:28:20.784 "serial_number": "SPDK00000000000001", 00:28:20.784 "model_number": "SPDK bdev Controller", 00:28:20.784 "max_namespaces": 2, 00:28:20.784 "min_cntlid": 1, 00:28:20.784 "max_cntlid": 65519, 00:28:20.784 "namespaces": [ 00:28:20.784 { 00:28:20.784 "nsid": 1, 00:28:20.784 "bdev_name": "Malloc0", 00:28:20.784 "name": "Malloc0", 00:28:20.784 "nguid": "B0CE141ABB7948B383698EA6B458A3E0", 00:28:20.784 "uuid": "b0ce141a-bb79-48b3-8369-8ea6b458a3e0" 00:28:20.784 }, 00:28:20.784 { 00:28:20.784 "nsid": 2, 00:28:20.784 "bdev_name": "Malloc1", 00:28:20.784 "name": "Malloc1", 00:28:20.784 "nguid": "6FF9771E98414A73967EE991D2A746B2", 00:28:20.784 "uuid": "6ff9771e-9841-4a73-967e-e991d2a746b2" 00:28:20.784 } 00:28:20.784 ] 00:28:20.784 } 00:28:20.784 ] 00:28:20.784 10:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.784 10:23:34 -- host/aer.sh@43 -- # wait 427652 00:28:20.784 10:23:34 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:20.784 10:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.784 10:23:34 -- common/autotest_common.sh@10 -- # set +x 00:28:20.784 10:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.784 10:23:34 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:20.784 10:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.784 10:23:34 -- common/autotest_common.sh@10 -- # set +x 00:28:20.784 10:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.784 10:23:34 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.784 10:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.784 10:23:34 -- common/autotest_common.sh@10 -- # set +x 00:28:21.042 10:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.042 10:23:34 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:21.042 10:23:34 -- host/aer.sh@51 -- # nvmftestfini 00:28:21.042 10:23:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:21.042 10:23:34 -- nvmf/common.sh@116 -- # sync 00:28:21.042 10:23:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:21.042 10:23:34 -- nvmf/common.sh@119 -- # set +e 00:28:21.042 10:23:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:21.042 10:23:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:21.042 rmmod nvme_tcp 00:28:21.042 rmmod nvme_fabrics 00:28:21.042 rmmod nvme_keyring 00:28:21.042 10:23:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:21.042 10:23:34 -- nvmf/common.sh@123 -- # set -e 00:28:21.042 10:23:34 -- nvmf/common.sh@124 -- # return 0 00:28:21.042 10:23:34 -- nvmf/common.sh@477 -- # '[' -n 427422 ']' 00:28:21.042 10:23:34 -- 
nvmf/common.sh@478 -- # killprocess 427422 00:28:21.042 10:23:34 -- common/autotest_common.sh@926 -- # '[' -z 427422 ']' 00:28:21.042 10:23:34 -- common/autotest_common.sh@930 -- # kill -0 427422 00:28:21.042 10:23:34 -- common/autotest_common.sh@931 -- # uname 00:28:21.042 10:23:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:21.042 10:23:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 427422 00:28:21.042 10:23:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:21.042 10:23:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:21.042 10:23:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 427422' 00:28:21.042 killing process with pid 427422 00:28:21.042 10:23:34 -- common/autotest_common.sh@945 -- # kill 427422 00:28:21.042 [2024-04-24 10:23:34.178982] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:21.042 10:23:34 -- common/autotest_common.sh@950 -- # wait 427422 00:28:21.298 10:23:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:21.298 10:23:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:21.298 10:23:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:21.298 10:23:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.298 10:23:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:21.298 10:23:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.298 10:23:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.298 10:23:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.251 10:23:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:23.251 00:28:23.251 real 0m8.697s 00:28:23.251 user 0m7.050s 00:28:23.251 sys 0m4.125s 00:28:23.251 10:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.251 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:28:23.251 ************************************ 00:28:23.251 END TEST nvmf_aer 00:28:23.251 ************************************ 00:28:23.251 10:23:36 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:23.251 10:23:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:23.251 10:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:23.251 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:28:23.251 ************************************ 00:28:23.251 START TEST nvmf_async_init 00:28:23.251 ************************************ 00:28:23.251 10:23:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:23.508 * Looking for test storage... 
00:28:23.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.508 10:23:36 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.509 10:23:36 -- nvmf/common.sh@7 -- # uname -s 00:28:23.509 10:23:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.509 10:23:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.509 10:23:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.509 10:23:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.509 10:23:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.509 10:23:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.509 10:23:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.509 10:23:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.509 10:23:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.509 10:23:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.509 10:23:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.509 10:23:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.509 10:23:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.509 10:23:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.509 10:23:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.509 10:23:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.509 10:23:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.509 10:23:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.509 10:23:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.509 10:23:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.509 10:23:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.509 10:23:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.509 10:23:36 -- paths/export.sh@5 -- # export PATH 00:28:23.509 10:23:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.509 10:23:36 -- nvmf/common.sh@46 -- # : 0 00:28:23.509 10:23:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:23.509 10:23:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:23.509 10:23:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:23.509 10:23:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.509 10:23:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.509 10:23:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:23.509 10:23:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:23.509 10:23:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:23.509 10:23:36 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:23.509 10:23:36 -- host/async_init.sh@14 -- # null_block_size=512 00:28:23.509 10:23:36 -- host/async_init.sh@15 -- # null_bdev=null0 00:28:23.509 10:23:36 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:23.509 10:23:36 -- host/async_init.sh@20 -- # uuidgen 00:28:23.509 10:23:36 -- host/async_init.sh@20 -- # tr -d - 00:28:23.509 10:23:36 -- host/async_init.sh@20 -- # nguid=decbba929ddd4ac6817cd13b0936d702 00:28:23.509 10:23:36 -- host/async_init.sh@22 -- # nvmftestinit 00:28:23.509 10:23:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:23.509 10:23:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.509 10:23:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:23.509 10:23:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:23.509 10:23:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:23.509 10:23:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.509 10:23:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.509 10:23:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.509 10:23:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:23.509 10:23:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:23.509 10:23:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:23.509 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:28:28.801 10:23:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:28.801 10:23:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:28.801 10:23:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:28.801 10:23:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:28.801 10:23:41 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:28.801 10:23:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:28.801 10:23:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:28.801 10:23:41 -- nvmf/common.sh@294 -- # net_devs=() 00:28:28.801 10:23:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:28.801 10:23:41 -- nvmf/common.sh@295 -- # e810=() 00:28:28.801 10:23:41 -- nvmf/common.sh@295 -- # local -ga e810 00:28:28.801 10:23:41 -- nvmf/common.sh@296 -- # x722=() 00:28:28.801 10:23:41 -- nvmf/common.sh@296 -- # local -ga x722 00:28:28.801 10:23:41 -- nvmf/common.sh@297 -- # mlx=() 00:28:28.801 10:23:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:28.801 10:23:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.801 10:23:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:28.801 10:23:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:28.801 10:23:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:28.801 10:23:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.801 10:23:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:28.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:28.801 10:23:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.801 10:23:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:28.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:28.801 10:23:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:28.801 10:23:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.801 
10:23:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.801 10:23:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.801 10:23:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.801 10:23:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:28.801 Found net devices under 0000:86:00.0: cvl_0_0 00:28:28.801 10:23:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.801 10:23:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.801 10:23:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.801 10:23:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.801 10:23:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.801 10:23:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:28.801 Found net devices under 0000:86:00.1: cvl_0_1 00:28:28.801 10:23:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.801 10:23:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:28.801 10:23:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:28.801 10:23:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:28.801 10:23:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.801 10:23:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.801 10:23:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.801 10:23:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:28.801 10:23:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.801 10:23:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.801 10:23:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:28.801 10:23:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.801 10:23:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.801 10:23:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:28.801 10:23:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:28.801 10:23:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.801 10:23:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.801 10:23:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.801 10:23:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.801 10:23:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:28.801 10:23:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.801 10:23:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.801 10:23:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.801 10:23:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:28.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:28:28.801 00:28:28.801 --- 10.0.0.2 ping statistics --- 00:28:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.801 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:28:28.801 10:23:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:28:28.801 00:28:28.801 --- 10.0.0.1 ping statistics --- 00:28:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.801 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:28:28.801 10:23:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.801 10:23:41 -- nvmf/common.sh@410 -- # return 0 00:28:28.801 10:23:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:28.801 10:23:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.801 10:23:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:28.801 10:23:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.801 10:23:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:28.801 10:23:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:28.801 10:23:41 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:28.801 10:23:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:28.801 10:23:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:28.801 10:23:41 -- common/autotest_common.sh@10 -- # set +x 00:28:28.801 10:23:41 -- nvmf/common.sh@469 -- # nvmfpid=431185 00:28:28.801 10:23:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:28.801 10:23:41 -- nvmf/common.sh@470 -- # waitforlisten 431185 00:28:28.801 10:23:41 -- common/autotest_common.sh@819 -- # '[' -z 431185 ']' 00:28:28.801 10:23:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.801 10:23:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:28.801 10:23:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.801 10:23:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:28.801 10:23:41 -- common/autotest_common.sh@10 -- # set +x 00:28:28.801 [2024-04-24 10:23:41.693120] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:28.801 [2024-04-24 10:23:41.693163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.801 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.802 [2024-04-24 10:23:41.750661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.802 [2024-04-24 10:23:41.819961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:28.802 [2024-04-24 10:23:41.820079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.802 [2024-04-24 10:23:41.820087] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.802 [2024-04-24 10:23:41.820094] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
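This is the same namespace-and-ping setup as in the nvmf_aer run above, repeated for the async_init test, after which nvmfappstart launches a single-core target (-m 0x1) in the namespace and waitforlisten blocks until the RPC socket answers. A rough sketch of that start-and-wait pattern, assuming an SPDK build tree with scripts/rpc.py available (the loop is a minimal stand-in for the real waitforlisten helper, which retries a bounded number of times and fails the test if the socket never appears):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll /var/tmp/spdk.sock instead of sleeping for a fixed time
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.1
    done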
00:28:28.802 [2024-04-24 10:23:41.820131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.367 10:23:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:29.367 10:23:42 -- common/autotest_common.sh@852 -- # return 0 00:28:29.367 10:23:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:29.367 10:23:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 10:23:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.367 10:23:42 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 [2024-04-24 10:23:42.523211] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.367 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.367 10:23:42 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 null0 00:28:29.367 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.367 10:23:42 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.367 10:23:42 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.367 10:23:42 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g decbba929ddd4ac6817cd13b0936d702 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.367 10:23:42 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.367 [2024-04-24 10:23:42.567442] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.367 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.367 10:23:42 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:29.367 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.367 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.624 nvme0n1 00:28:29.624 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.624 10:23:42 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:29.625 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.625 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.625 [ 00:28:29.625 { 00:28:29.625 "name": "nvme0n1", 00:28:29.625 "aliases": [ 00:28:29.625 
"decbba92-9ddd-4ac6-817c-d13b0936d702" 00:28:29.625 ], 00:28:29.625 "product_name": "NVMe disk", 00:28:29.625 "block_size": 512, 00:28:29.625 "num_blocks": 2097152, 00:28:29.625 "uuid": "decbba92-9ddd-4ac6-817c-d13b0936d702", 00:28:29.625 "assigned_rate_limits": { 00:28:29.625 "rw_ios_per_sec": 0, 00:28:29.625 "rw_mbytes_per_sec": 0, 00:28:29.625 "r_mbytes_per_sec": 0, 00:28:29.625 "w_mbytes_per_sec": 0 00:28:29.625 }, 00:28:29.625 "claimed": false, 00:28:29.625 "zoned": false, 00:28:29.625 "supported_io_types": { 00:28:29.625 "read": true, 00:28:29.625 "write": true, 00:28:29.625 "unmap": false, 00:28:29.625 "write_zeroes": true, 00:28:29.625 "flush": true, 00:28:29.625 "reset": true, 00:28:29.625 "compare": true, 00:28:29.625 "compare_and_write": true, 00:28:29.625 "abort": true, 00:28:29.625 "nvme_admin": true, 00:28:29.625 "nvme_io": true 00:28:29.625 }, 00:28:29.625 "driver_specific": { 00:28:29.625 "nvme": [ 00:28:29.625 { 00:28:29.625 "trid": { 00:28:29.625 "trtype": "TCP", 00:28:29.625 "adrfam": "IPv4", 00:28:29.625 "traddr": "10.0.0.2", 00:28:29.625 "trsvcid": "4420", 00:28:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:29.625 }, 00:28:29.625 "ctrlr_data": { 00:28:29.625 "cntlid": 1, 00:28:29.625 "vendor_id": "0x8086", 00:28:29.625 "model_number": "SPDK bdev Controller", 00:28:29.625 "serial_number": "00000000000000000000", 00:28:29.625 "firmware_revision": "24.01.1", 00:28:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.625 "oacs": { 00:28:29.625 "security": 0, 00:28:29.625 "format": 0, 00:28:29.625 "firmware": 0, 00:28:29.625 "ns_manage": 0 00:28:29.625 }, 00:28:29.625 "multi_ctrlr": true, 00:28:29.625 "ana_reporting": false 00:28:29.625 }, 00:28:29.625 "vs": { 00:28:29.625 "nvme_version": "1.3" 00:28:29.625 }, 00:28:29.625 "ns_data": { 00:28:29.625 "id": 1, 00:28:29.625 "can_share": true 00:28:29.625 } 00:28:29.625 } 00:28:29.625 ], 00:28:29.625 "mp_policy": "active_passive" 00:28:29.625 } 00:28:29.625 } 00:28:29.625 ] 00:28:29.625 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.625 10:23:42 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:29.625 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.625 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.625 [2024-04-24 10:23:42.820075] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.625 [2024-04-24 10:23:42.820134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f9270 (9): Bad file descriptor 00:28:29.883 [2024-04-24 10:23:42.952152] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:29.883 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:42 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:29.884 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 [ 00:28:29.884 { 00:28:29.884 "name": "nvme0n1", 00:28:29.884 "aliases": [ 00:28:29.884 "decbba92-9ddd-4ac6-817c-d13b0936d702" 00:28:29.884 ], 00:28:29.884 "product_name": "NVMe disk", 00:28:29.884 "block_size": 512, 00:28:29.884 "num_blocks": 2097152, 00:28:29.884 "uuid": "decbba92-9ddd-4ac6-817c-d13b0936d702", 00:28:29.884 "assigned_rate_limits": { 00:28:29.884 "rw_ios_per_sec": 0, 00:28:29.884 "rw_mbytes_per_sec": 0, 00:28:29.884 "r_mbytes_per_sec": 0, 00:28:29.884 "w_mbytes_per_sec": 0 00:28:29.884 }, 00:28:29.884 "claimed": false, 00:28:29.884 "zoned": false, 00:28:29.884 "supported_io_types": { 00:28:29.884 "read": true, 00:28:29.884 "write": true, 00:28:29.884 "unmap": false, 00:28:29.884 "write_zeroes": true, 00:28:29.884 "flush": true, 00:28:29.884 "reset": true, 00:28:29.884 "compare": true, 00:28:29.884 "compare_and_write": true, 00:28:29.884 "abort": true, 00:28:29.884 "nvme_admin": true, 00:28:29.884 "nvme_io": true 00:28:29.884 }, 00:28:29.884 "driver_specific": { 00:28:29.884 "nvme": [ 00:28:29.884 { 00:28:29.884 "trid": { 00:28:29.884 "trtype": "TCP", 00:28:29.884 "adrfam": "IPv4", 00:28:29.884 "traddr": "10.0.0.2", 00:28:29.884 "trsvcid": "4420", 00:28:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:29.884 }, 00:28:29.884 "ctrlr_data": { 00:28:29.884 "cntlid": 2, 00:28:29.884 "vendor_id": "0x8086", 00:28:29.884 "model_number": "SPDK bdev Controller", 00:28:29.884 "serial_number": "00000000000000000000", 00:28:29.884 "firmware_revision": "24.01.1", 00:28:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.884 "oacs": { 00:28:29.884 "security": 0, 00:28:29.884 "format": 0, 00:28:29.884 "firmware": 0, 00:28:29.884 "ns_manage": 0 00:28:29.884 }, 00:28:29.884 "multi_ctrlr": true, 00:28:29.884 "ana_reporting": false 00:28:29.884 }, 00:28:29.884 "vs": { 00:28:29.884 "nvme_version": "1.3" 00:28:29.884 }, 00:28:29.884 "ns_data": { 00:28:29.884 "id": 1, 00:28:29.884 "can_share": true 00:28:29.884 } 00:28:29.884 } 00:28:29.884 ], 00:28:29.884 "mp_policy": "active_passive" 00:28:29.884 } 00:28:29.884 } 00:28:29.884 ] 00:28:29.884 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:42 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.884 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 10:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:42 -- host/async_init.sh@53 -- # mktemp 00:28:29.884 10:23:42 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DnHM3sxCg6 00:28:29.884 10:23:42 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:29.884 10:23:42 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DnHM3sxCg6 00:28:29.884 10:23:42 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:29.884 10:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 10:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:43 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:29.884 10:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 [2024-04-24 10:23:43.008633] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:29.884 [2024-04-24 10:23:43.008761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:29.884 10:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:43 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DnHM3sxCg6 00:28:29.884 10:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 10:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:43 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DnHM3sxCg6 00:28:29.884 10:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 [2024-04-24 10:23:43.024676] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:29.884 nvme0n1 00:28:29.884 10:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.884 10:23:43 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:29.884 10:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.884 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 [ 00:28:29.884 { 00:28:29.884 "name": "nvme0n1", 00:28:29.884 "aliases": [ 00:28:29.884 "decbba92-9ddd-4ac6-817c-d13b0936d702" 00:28:29.884 ], 00:28:29.884 "product_name": "NVMe disk", 00:28:29.884 "block_size": 512, 00:28:29.884 "num_blocks": 2097152, 00:28:29.884 "uuid": "decbba92-9ddd-4ac6-817c-d13b0936d702", 00:28:29.884 "assigned_rate_limits": { 00:28:29.884 "rw_ios_per_sec": 0, 00:28:29.884 "rw_mbytes_per_sec": 0, 00:28:29.884 "r_mbytes_per_sec": 0, 00:28:29.884 "w_mbytes_per_sec": 0 00:28:29.884 }, 00:28:29.884 "claimed": false, 00:28:29.884 "zoned": false, 00:28:29.884 "supported_io_types": { 00:28:29.884 "read": true, 00:28:29.884 "write": true, 00:28:29.884 "unmap": false, 00:28:29.884 "write_zeroes": true, 00:28:29.884 "flush": true, 00:28:29.884 "reset": true, 00:28:29.884 "compare": true, 00:28:29.884 "compare_and_write": true, 00:28:29.884 "abort": true, 00:28:29.884 "nvme_admin": true, 00:28:29.884 "nvme_io": true 00:28:29.884 }, 00:28:29.884 "driver_specific": { 00:28:29.884 "nvme": [ 00:28:29.884 { 00:28:29.884 "trid": { 00:28:29.884 "trtype": "TCP", 00:28:29.884 "adrfam": "IPv4", 00:28:29.884 "traddr": "10.0.0.2", 00:28:29.884 "trsvcid": "4421", 00:28:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:29.884 }, 00:28:29.884 "ctrlr_data": { 00:28:29.884 "cntlid": 3, 00:28:29.884 "vendor_id": "0x8086", 00:28:29.884 "model_number": "SPDK bdev Controller", 00:28:29.885 "serial_number": "00000000000000000000", 00:28:29.885 "firmware_revision": "24.01.1", 00:28:29.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.885 "oacs": { 00:28:29.885 "security": 0, 00:28:29.885 "format": 0, 00:28:29.885 "firmware": 0, 00:28:29.885 "ns_manage": 0 00:28:29.885 }, 00:28:29.885 "multi_ctrlr": true, 00:28:29.885 "ana_reporting": false 00:28:29.885 }, 00:28:29.885 "vs": 
{ 00:28:29.885 "nvme_version": "1.3" 00:28:29.885 }, 00:28:29.885 "ns_data": { 00:28:29.885 "id": 1, 00:28:29.885 "can_share": true 00:28:29.885 } 00:28:29.885 } 00:28:29.885 ], 00:28:29.885 "mp_policy": "active_passive" 00:28:29.885 } 00:28:29.885 } 00:28:29.885 ] 00:28:29.885 10:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.885 10:23:43 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.885 10:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.885 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.885 10:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.885 10:23:43 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.DnHM3sxCg6 00:28:29.885 10:23:43 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:29.885 10:23:43 -- host/async_init.sh@78 -- # nvmftestfini 00:28:29.885 10:23:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:29.885 10:23:43 -- nvmf/common.sh@116 -- # sync 00:28:29.885 10:23:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:29.885 10:23:43 -- nvmf/common.sh@119 -- # set +e 00:28:29.885 10:23:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:29.885 10:23:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:29.885 rmmod nvme_tcp 00:28:29.885 rmmod nvme_fabrics 00:28:30.143 rmmod nvme_keyring 00:28:30.143 10:23:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:30.143 10:23:43 -- nvmf/common.sh@123 -- # set -e 00:28:30.143 10:23:43 -- nvmf/common.sh@124 -- # return 0 00:28:30.143 10:23:43 -- nvmf/common.sh@477 -- # '[' -n 431185 ']' 00:28:30.143 10:23:43 -- nvmf/common.sh@478 -- # killprocess 431185 00:28:30.143 10:23:43 -- common/autotest_common.sh@926 -- # '[' -z 431185 ']' 00:28:30.143 10:23:43 -- common/autotest_common.sh@930 -- # kill -0 431185 00:28:30.143 10:23:43 -- common/autotest_common.sh@931 -- # uname 00:28:30.143 10:23:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:30.143 10:23:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 431185 00:28:30.143 10:23:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:30.143 10:23:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:30.143 10:23:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 431185' 00:28:30.143 killing process with pid 431185 00:28:30.143 10:23:43 -- common/autotest_common.sh@945 -- # kill 431185 00:28:30.143 10:23:43 -- common/autotest_common.sh@950 -- # wait 431185 00:28:30.401 10:23:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:30.401 10:23:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:30.401 10:23:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:30.401 10:23:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:30.401 10:23:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:30.401 10:23:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.401 10:23:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.401 10:23:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.304 10:23:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:32.304 00:28:32.304 real 0m8.997s 00:28:32.304 user 0m3.355s 00:28:32.304 sys 0m4.154s 00:28:32.304 10:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.304 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:28:32.304 ************************************ 00:28:32.304 END TEST nvmf_async_init 00:28:32.304 
************************************ 00:28:32.304 10:23:45 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:32.304 10:23:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:32.304 10:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.304 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:28:32.304 ************************************ 00:28:32.304 START TEST dma 00:28:32.304 ************************************ 00:28:32.304 10:23:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:32.563 * Looking for test storage... 00:28:32.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.563 10:23:45 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.563 10:23:45 -- nvmf/common.sh@7 -- # uname -s 00:28:32.563 10:23:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.563 10:23:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.563 10:23:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.563 10:23:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.563 10:23:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.563 10:23:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.563 10:23:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.563 10:23:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.563 10:23:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.563 10:23:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.563 10:23:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:32.563 10:23:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:32.563 10:23:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.563 10:23:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.563 10:23:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.563 10:23:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.563 10:23:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.563 10:23:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.563 10:23:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.563 10:23:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.563 10:23:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.563 10:23:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.563 10:23:45 -- paths/export.sh@5 -- # export PATH 00:28:32.564 10:23:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.564 10:23:45 -- nvmf/common.sh@46 -- # : 0 00:28:32.564 10:23:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:32.564 10:23:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:32.564 10:23:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.564 10:23:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.564 10:23:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:32.564 10:23:45 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:32.564 10:23:45 -- host/dma.sh@13 -- # exit 0 00:28:32.564 00:28:32.564 real 0m0.076s 00:28:32.564 user 0m0.020s 00:28:32.564 sys 0m0.060s 00:28:32.564 10:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.564 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:28:32.564 ************************************ 00:28:32.564 END TEST dma 00:28:32.564 ************************************ 00:28:32.564 10:23:45 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:32.564 10:23:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:32.564 10:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.564 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:28:32.564 ************************************ 00:28:32.564 START TEST nvmf_identify 00:28:32.564 ************************************ 00:28:32.564 10:23:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:32.564 * Looking for 
test storage... 00:28:32.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.564 10:23:45 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.564 10:23:45 -- nvmf/common.sh@7 -- # uname -s 00:28:32.564 10:23:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.564 10:23:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.564 10:23:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.564 10:23:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.564 10:23:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.564 10:23:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.564 10:23:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.564 10:23:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.564 10:23:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.564 10:23:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.564 10:23:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:32.564 10:23:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:32.564 10:23:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.564 10:23:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.564 10:23:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.564 10:23:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.564 10:23:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.564 10:23:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.564 10:23:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.564 10:23:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.564 10:23:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.564 10:23:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.564 10:23:45 -- paths/export.sh@5 -- # export PATH 00:28:32.564 10:23:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.564 10:23:45 -- nvmf/common.sh@46 -- # : 0 00:28:32.564 10:23:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:32.564 10:23:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:32.564 10:23:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.564 10:23:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.564 10:23:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:32.564 10:23:45 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:32.564 10:23:45 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:32.564 10:23:45 -- host/identify.sh@14 -- # nvmftestinit 00:28:32.564 10:23:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:32.564 10:23:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.564 10:23:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:32.564 10:23:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:32.564 10:23:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:32.564 10:23:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.564 10:23:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.564 10:23:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.564 10:23:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:32.564 10:23:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:32.564 10:23:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:32.564 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:28:37.827 10:23:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:37.827 10:23:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:37.827 10:23:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:37.827 10:23:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:37.827 10:23:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:37.827 10:23:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:37.827 10:23:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:37.827 10:23:50 -- nvmf/common.sh@294 -- # net_devs=() 00:28:37.827 10:23:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:37.827 10:23:50 -- nvmf/common.sh@295 
-- # e810=() 00:28:37.827 10:23:50 -- nvmf/common.sh@295 -- # local -ga e810 00:28:37.827 10:23:50 -- nvmf/common.sh@296 -- # x722=() 00:28:37.827 10:23:50 -- nvmf/common.sh@296 -- # local -ga x722 00:28:37.827 10:23:50 -- nvmf/common.sh@297 -- # mlx=() 00:28:37.827 10:23:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:37.827 10:23:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.827 10:23:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:37.827 10:23:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:37.827 10:23:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:37.827 10:23:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:37.827 10:23:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:37.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:37.827 10:23:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:37.827 10:23:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:37.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:37.827 10:23:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:37.827 10:23:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:37.827 10:23:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:37.827 10:23:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.827 10:23:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:37.827 10:23:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.827 10:23:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:37.827 Found 
net devices under 0000:86:00.0: cvl_0_0 00:28:37.827 10:23:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.828 10:23:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:37.828 10:23:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.828 10:23:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:37.828 10:23:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.828 10:23:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:37.828 Found net devices under 0000:86:00.1: cvl_0_1 00:28:37.828 10:23:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.828 10:23:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:37.828 10:23:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:37.828 10:23:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:37.828 10:23:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:37.828 10:23:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:37.828 10:23:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.828 10:23:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.828 10:23:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.828 10:23:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:37.828 10:23:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.828 10:23:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.828 10:23:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:37.828 10:23:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.828 10:23:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.828 10:23:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:37.828 10:23:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:37.828 10:23:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.828 10:23:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.828 10:23:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.828 10:23:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.828 10:23:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:37.828 10:23:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.828 10:23:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.828 10:23:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.828 10:23:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:37.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:28:37.828 00:28:37.828 --- 10.0.0.2 ping statistics --- 00:28:37.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.828 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:37.828 10:23:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:28:37.828 00:28:37.828 --- 10.0.0.1 ping statistics --- 00:28:37.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.828 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:37.828 10:23:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.828 10:23:51 -- nvmf/common.sh@410 -- # return 0 00:28:37.828 10:23:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:37.828 10:23:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.828 10:23:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:37.828 10:23:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:37.828 10:23:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.828 10:23:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:37.828 10:23:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:37.828 10:23:51 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:37.828 10:23:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:37.828 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.828 10:23:51 -- host/identify.sh@19 -- # nvmfpid=434794 00:28:37.828 10:23:51 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.828 10:23:51 -- host/identify.sh@23 -- # waitforlisten 434794 00:28:37.828 10:23:51 -- common/autotest_common.sh@819 -- # '[' -z 434794 ']' 00:28:37.828 10:23:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.828 10:23:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:37.828 10:23:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.828 10:23:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:37.828 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.828 10:23:51 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:38.087 [2024-04-24 10:23:51.145016] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:38.087 [2024-04-24 10:23:51.145059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.087 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.087 [2024-04-24 10:23:51.203493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.087 [2024-04-24 10:23:51.283897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:38.087 [2024-04-24 10:23:51.284007] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.087 [2024-04-24 10:23:51.284014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.087 [2024-04-24 10:23:51.284021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
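
Condensed to its effective commands, the TCP topology that nvmf_tcp_init assembled above comes down to the sketch below. The interface names cvl_0_0/cvl_0_1 and the namespace name are specific to this run's E810 ports; this is a summary of what the trace shows, not the full nvmf/common.sh logic.

# Target port moves into its own network namespace; the initiator port
# stays in the root namespace. Both share 10.0.0.0/24.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check

With both pings answering, nvmf_tgt is started inside the namespace (the ip netns exec cvl_0_0_ns_spdk wrapper on the invocation above), so every target-side command in the rest of the test runs via NVMF_TARGET_NS_CMD.
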
00:28:38.087 [2024-04-24 10:23:51.284062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.087 [2024-04-24 10:23:51.284081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.087 [2024-04-24 10:23:51.284160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.087 [2024-04-24 10:23:51.284162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.024 10:23:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:39.024 10:23:51 -- common/autotest_common.sh@852 -- # return 0 00:28:39.024 10:23:51 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:39.024 10:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 [2024-04-24 10:23:51.952205] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.024 10:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.024 10:23:51 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:39.024 10:23:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:39.024 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 10:23:51 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:39.024 10:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 Malloc0 00:28:39.024 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.024 10:23:52 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.024 10:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.024 10:23:52 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:39.024 10:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.024 10:23:52 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.024 10:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 [2024-04-24 10:23:52.040273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.024 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.024 10:23:52 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:39.024 10:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.024 10:23:52 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:39.024 10:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.024 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:28:39.024 [2024-04-24 10:23:52.056125] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:39.024 [ 
00:28:39.024 { 00:28:39.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:39.024 "subtype": "Discovery", 00:28:39.024 "listen_addresses": [ 00:28:39.024 { 00:28:39.024 "transport": "TCP", 00:28:39.024 "trtype": "TCP", 00:28:39.024 "adrfam": "IPv4", 00:28:39.024 "traddr": "10.0.0.2", 00:28:39.024 "trsvcid": "4420" 00:28:39.024 } 00:28:39.024 ], 00:28:39.024 "allow_any_host": true, 00:28:39.024 "hosts": [] 00:28:39.024 }, 00:28:39.024 { 00:28:39.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:39.025 "subtype": "NVMe", 00:28:39.025 "listen_addresses": [ 00:28:39.025 { 00:28:39.025 "transport": "TCP", 00:28:39.025 "trtype": "TCP", 00:28:39.025 "adrfam": "IPv4", 00:28:39.025 "traddr": "10.0.0.2", 00:28:39.025 "trsvcid": "4420" 00:28:39.025 } 00:28:39.025 ], 00:28:39.025 "allow_any_host": true, 00:28:39.025 "hosts": [], 00:28:39.025 "serial_number": "SPDK00000000000001", 00:28:39.025 "model_number": "SPDK bdev Controller", 00:28:39.025 "max_namespaces": 32, 00:28:39.025 "min_cntlid": 1, 00:28:39.025 "max_cntlid": 65519, 00:28:39.025 "namespaces": [ 00:28:39.025 { 00:28:39.025 "nsid": 1, 00:28:39.025 "bdev_name": "Malloc0", 00:28:39.025 "name": "Malloc0", 00:28:39.025 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:39.025 "eui64": "ABCDEF0123456789", 00:28:39.025 "uuid": "249e5619-2f98-4bbf-86a4-b1a9a7877ba3" 00:28:39.025 } 00:28:39.025 ] 00:28:39.025 } 00:28:39.025 ] 00:28:39.025 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.025 10:23:52 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:39.025 [2024-04-24 10:23:52.089611] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
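
The JSON above is the target state built by the rpc_cmd calls in identify.sh; rpc_cmd is the autotest wrapper around scripts/rpc.py. As a standalone sketch (paths relative to the spdk checkout), the same configuration is:

# Recreate the configuration shown in the nvmf_get_subsystems dump:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -o/-u: TCP tuning flags as used by the harness
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                               # -a: allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems

Note the warning printed with the dump: this SPDK version still emits the deprecated listener.transport field alongside trtype, scheduled for removal in v24.05.
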
00:28:39.025 [2024-04-24 10:23:52.089647] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435043 ] 00:28:39.025 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.025 [2024-04-24 10:23:52.117602] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:39.025 [2024-04-24 10:23:52.117648] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:39.025 [2024-04-24 10:23:52.117653] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:39.025 [2024-04-24 10:23:52.117664] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:39.025 [2024-04-24 10:23:52.117671] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:39.025 [2024-04-24 10:23:52.118064] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:39.025 [2024-04-24 10:23:52.118097] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x237a9e0 0 00:28:39.025 [2024-04-24 10:23:52.128077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:39.025 [2024-04-24 10:23:52.128091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:39.025 [2024-04-24 10:23:52.128095] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:39.025 [2024-04-24 10:23:52.128098] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:39.025 [2024-04-24 10:23:52.128135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.128141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.128145] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.128157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:39.025 [2024-04-24 10:23:52.128174] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.025 [2024-04-24 10:23:52.135080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.025 [2024-04-24 10:23:52.135089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.025 [2024-04-24 10:23:52.135092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.025 [2024-04-24 10:23:52.135107] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:39.025 [2024-04-24 10:23:52.135113] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:39.025 [2024-04-24 10:23:52.135118] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:39.025 [2024-04-24 10:23:52.135133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135140] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.135147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.025 [2024-04-24 10:23:52.135160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.025 [2024-04-24 10:23:52.135331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.025 [2024-04-24 10:23:52.135337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.025 [2024-04-24 10:23:52.135340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135344] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.025 [2024-04-24 10:23:52.135355] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:39.025 [2024-04-24 10:23:52.135363] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:39.025 [2024-04-24 10:23:52.135370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135376] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.135383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.025 [2024-04-24 10:23:52.135393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.025 [2024-04-24 10:23:52.135493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.025 [2024-04-24 10:23:52.135499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.025 [2024-04-24 10:23:52.135502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.025 [2024-04-24 10:23:52.135510] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:39.025 [2024-04-24 10:23:52.135518] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:39.025 [2024-04-24 10:23:52.135524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.135536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.025 [2024-04-24 10:23:52.135545] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.025 [2024-04-24 10:23:52.135631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.025 [2024-04-24 
10:23:52.135637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.025 [2024-04-24 10:23:52.135640] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135643] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.025 [2024-04-24 10:23:52.135648] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:39.025 [2024-04-24 10:23:52.135656] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135660] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.135668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.025 [2024-04-24 10:23:52.135677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.025 [2024-04-24 10:23:52.135765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.025 [2024-04-24 10:23:52.135771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.025 [2024-04-24 10:23:52.135774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.025 [2024-04-24 10:23:52.135782] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:39.025 [2024-04-24 10:23:52.135786] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:39.025 [2024-04-24 10:23:52.135796] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:39.025 [2024-04-24 10:23:52.135901] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:39.025 [2024-04-24 10:23:52.135905] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:39.025 [2024-04-24 10:23:52.135912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.135918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.135925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.025 [2024-04-24 10:23:52.135935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.025 [2024-04-24 10:23:52.136020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.025 [2024-04-24 10:23:52.136026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.025 [2024-04-24 10:23:52.136029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.136032] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.025 [2024-04-24 10:23:52.136037] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:39.025 [2024-04-24 10:23:52.136044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.136048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.025 [2024-04-24 10:23:52.136051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.025 [2024-04-24 10:23:52.136057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.026 [2024-04-24 10:23:52.136065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.026 [2024-04-24 10:23:52.136155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.026 [2024-04-24 10:23:52.136161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.026 [2024-04-24 10:23:52.136164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.136167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.026 [2024-04-24 10:23:52.136173] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:39.026 [2024-04-24 10:23:52.136177] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:39.026 [2024-04-24 10:23:52.136184] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:39.026 [2024-04-24 10:23:52.136195] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:39.026 [2024-04-24 10:23:52.136203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.136207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.136210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.136216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.026 [2024-04-24 10:23:52.136227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.026 [2024-04-24 10:23:52.136342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.026 [2024-04-24 10:23:52.136348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.026 [2024-04-24 10:23:52.136351] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.136354] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237a9e0): datao=0, datal=4096, cccid=0 00:28:39.026 [2024-04-24 10:23:52.136358] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23e2730) on tqpair(0x237a9e0): 
expected_datao=0, payload_size=4096 00:28:39.026 [2024-04-24 10:23:52.136394] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.136399] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.026 [2024-04-24 10:23:52.177249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.026 [2024-04-24 10:23:52.177253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177257] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.026 [2024-04-24 10:23:52.177266] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:39.026 [2024-04-24 10:23:52.177274] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:39.026 [2024-04-24 10:23:52.177278] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:39.026 [2024-04-24 10:23:52.177282] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:39.026 [2024-04-24 10:23:52.177286] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:39.026 [2024-04-24 10:23:52.177290] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:39.026 [2024-04-24 10:23:52.177299] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:39.026 [2024-04-24 10:23:52.177305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177309] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:39.026 [2024-04-24 10:23:52.177331] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.026 [2024-04-24 10:23:52.177421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.026 [2024-04-24 10:23:52.177426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.026 [2024-04-24 10:23:52.177429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177432] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2730) on tqpair=0x237a9e0 00:28:39.026 [2024-04-24 10:23:52.177439] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177442] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
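
The DEBUG trace through here is the host library's init state machine against the discovery controller: ICReq/ICResp exchange, FABRIC CONNECT on the admin queue (CNTLID 0x0001), PROPERTY GET/SET of VS, CAP, CC and CSTS, a disable/enable cycle, then IDENTIFY (cdw10:00000001, i.e. CNS 01h, the 4096-byte controller data structure) and AER configuration. The same discovery service can be read back with the kernel initiator as a quick cross-check (a hypothetical manual step, assuming nvme-cli is installed; the autotest itself does not run this):

# Read the discovery log with nvme-cli instead of spdk_nvme_identify:
modprobe nvme-tcp                          # already loaded earlier in this run
nvme discover -t tcp -a 10.0.0.2 -s 4420
# Expect two entries, matching the identify output further below: the
# discovery subsystem itself and nqn.2016-06.io.spdk:cnode1.
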
00:28:39.026 [2024-04-24 10:23:52.177456] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177462] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.026 [2024-04-24 10:23:52.177474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.026 [2024-04-24 10:23:52.177490] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.026 [2024-04-24 10:23:52.177505] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:39.026 [2024-04-24 10:23:52.177515] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:39.026 [2024-04-24 10:23:52.177521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.026 [2024-04-24 10:23:52.177544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2730, cid 0, qid 0 00:28:39.026 [2024-04-24 10:23:52.177548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2890, cid 1, qid 0 00:28:39.026 [2024-04-24 10:23:52.177552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e29f0, cid 2, qid 0 00:28:39.026 [2024-04-24 10:23:52.177556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.026 [2024-04-24 10:23:52.177560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2cb0, cid 4, qid 0 00:28:39.026 [2024-04-24 10:23:52.177699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.026 [2024-04-24 10:23:52.177704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.026 [2024-04-24 10:23:52.177707] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177711] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2cb0) on tqpair=0x237a9e0 00:28:39.026 [2024-04-24 10:23:52.177716] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:39.026 [2024-04-24 10:23:52.177720] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:39.026 [2024-04-24 10:23:52.177730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177734] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.177742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.026 [2024-04-24 10:23:52.177751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2cb0, cid 4, qid 0 00:28:39.026 [2024-04-24 10:23:52.177845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.026 [2024-04-24 10:23:52.177850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.026 [2024-04-24 10:23:52.177855] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177859] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237a9e0): datao=0, datal=4096, cccid=4 00:28:39.026 [2024-04-24 10:23:52.177863] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23e2cb0) on tqpair(0x237a9e0): expected_datao=0, payload_size=4096 00:28:39.026 [2024-04-24 10:23:52.177869] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177872] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177948] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.026 [2024-04-24 10:23:52.177954] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.026 [2024-04-24 10:23:52.177957] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177960] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2cb0) on tqpair=0x237a9e0 00:28:39.026 [2024-04-24 10:23:52.177971] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:39.026 [2024-04-24 10:23:52.177988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177992] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.177995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 10:23:52.178001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.026 [2024-04-24 10:23:52.178007] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.178010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.026 [2024-04-24 10:23:52.178013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x237a9e0) 00:28:39.026 [2024-04-24 
10:23:52.178018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.026 [2024-04-24 10:23:52.178033] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2cb0, cid 4, qid 0 00:28:39.027 [2024-04-24 10:23:52.178038] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2e10, cid 5, qid 0 00:28:39.027 [2024-04-24 10:23:52.182079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.027 [2024-04-24 10:23:52.182086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.027 [2024-04-24 10:23:52.182089] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.182092] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237a9e0): datao=0, datal=1024, cccid=4 00:28:39.027 [2024-04-24 10:23:52.182096] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23e2cb0) on tqpair(0x237a9e0): expected_datao=0, payload_size=1024 00:28:39.027 [2024-04-24 10:23:52.182102] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.182105] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.182110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.027 [2024-04-24 10:23:52.182115] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.027 [2024-04-24 10:23:52.182118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.182121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2e10) on tqpair=0x237a9e0 00:28:39.027 [2024-04-24 10:23:52.222080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.027 [2024-04-24 10:23:52.222089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.027 [2024-04-24 10:23:52.222093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2cb0) on tqpair=0x237a9e0 00:28:39.027 [2024-04-24 10:23:52.222107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237a9e0) 00:28:39.027 [2024-04-24 10:23:52.222125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.027 [2024-04-24 10:23:52.222141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2cb0, cid 4, qid 0 00:28:39.027 [2024-04-24 10:23:52.222306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.027 [2024-04-24 10:23:52.222312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.027 [2024-04-24 10:23:52.222315] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222318] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237a9e0): datao=0, datal=3072, cccid=4 00:28:39.027 [2024-04-24 10:23:52.222322] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23e2cb0) on tqpair(0x237a9e0): expected_datao=0, payload_size=3072 
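
The discovery log itself is fetched with three GET LOG PAGE (opcode 02h) commands, all for log page 70h (the NVMe-oF discovery log page). In cdw10, bits 7:0 carry the log page ID and bits 31:16 carry NUMDL, a zero-based dword count, which is exactly why the c2h_data records report the payload sizes they do:

  cdw10:00ff0070  ->  LID 70h, NUMDL 0x00ff -> 256 dwords = 1024 B  (log header)
  cdw10:02ff0070  ->  LID 70h, NUMDL 0x02ff -> 768 dwords = 3072 B  (entries)
  cdw10:00010070  ->  LID 70h, NUMDL 0x0001 ->   2 dwords =    8 B  (generation
                      counter re-read, just below, to detect a change mid-read)

These sizes match datal=1024, datal=3072 and datal=8 in the surrounding c2h_data traces.
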
00:28:39.027 [2024-04-24 10:23:52.222362] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222367] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.027 [2024-04-24 10:23:52.222470] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.027 [2024-04-24 10:23:52.222472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2cb0) on tqpair=0x237a9e0 00:28:39.027 [2024-04-24 10:23:52.222484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237a9e0) 00:28:39.027 [2024-04-24 10:23:52.222498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.027 [2024-04-24 10:23:52.222512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2cb0, cid 4, qid 0 00:28:39.027 [2024-04-24 10:23:52.222605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.027 [2024-04-24 10:23:52.222611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.027 [2024-04-24 10:23:52.222616] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222620] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237a9e0): datao=0, datal=8, cccid=4 00:28:39.027 [2024-04-24 10:23:52.222625] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23e2cb0) on tqpair(0x237a9e0): expected_datao=0, payload_size=8 00:28:39.027 [2024-04-24 10:23:52.222631] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.222635] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.263273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.027 [2024-04-24 10:23:52.263288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.027 [2024-04-24 10:23:52.263291] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.027 [2024-04-24 10:23:52.263294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2cb0) on tqpair=0x237a9e0 00:28:39.027 ===================================================== 00:28:39.027 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:39.027 ===================================================== 00:28:39.027 Controller Capabilities/Features 00:28:39.027 ================================ 00:28:39.027 Vendor ID: 0000 00:28:39.027 Subsystem Vendor ID: 0000 00:28:39.027 Serial Number: .................... 00:28:39.027 Model Number: ........................................ 
00:28:39.027 Firmware Version: 24.01.1 00:28:39.027 Recommended Arb Burst: 0 00:28:39.027 IEEE OUI Identifier: 00 00 00 00:28:39.027 Multi-path I/O 00:28:39.027 May have multiple subsystem ports: No 00:28:39.027 May have multiple controllers: No 00:28:39.027 Associated with SR-IOV VF: No 00:28:39.027 Max Data Transfer Size: 131072 00:28:39.027 Max Number of Namespaces: 0 00:28:39.027 Max Number of I/O Queues: 1024 00:28:39.027 NVMe Specification Version (VS): 1.3 00:28:39.027 NVMe Specification Version (Identify): 1.3 00:28:39.027 Maximum Queue Entries: 128 00:28:39.027 Contiguous Queues Required: Yes 00:28:39.027 Arbitration Mechanisms Supported 00:28:39.027 Weighted Round Robin: Not Supported 00:28:39.027 Vendor Specific: Not Supported 00:28:39.027 Reset Timeout: 15000 ms 00:28:39.027 Doorbell Stride: 4 bytes 00:28:39.027 NVM Subsystem Reset: Not Supported 00:28:39.027 Command Sets Supported 00:28:39.027 NVM Command Set: Supported 00:28:39.027 Boot Partition: Not Supported 00:28:39.027 Memory Page Size Minimum: 4096 bytes 00:28:39.027 Memory Page Size Maximum: 4096 bytes 00:28:39.027 Persistent Memory Region: Not Supported 00:28:39.027 Optional Asynchronous Events Supported 00:28:39.027 Namespace Attribute Notices: Not Supported 00:28:39.027 Firmware Activation Notices: Not Supported 00:28:39.027 ANA Change Notices: Not Supported 00:28:39.027 PLE Aggregate Log Change Notices: Not Supported 00:28:39.027 LBA Status Info Alert Notices: Not Supported 00:28:39.027 EGE Aggregate Log Change Notices: Not Supported 00:28:39.027 Normal NVM Subsystem Shutdown event: Not Supported 00:28:39.027 Zone Descriptor Change Notices: Not Supported 00:28:39.027 Discovery Log Change Notices: Supported 00:28:39.027 Controller Attributes 00:28:39.027 128-bit Host Identifier: Not Supported 00:28:39.027 Non-Operational Permissive Mode: Not Supported 00:28:39.027 NVM Sets: Not Supported 00:28:39.027 Read Recovery Levels: Not Supported 00:28:39.027 Endurance Groups: Not Supported 00:28:39.027 Predictable Latency Mode: Not Supported 00:28:39.027 Traffic Based Keep ALive: Not Supported 00:28:39.027 Namespace Granularity: Not Supported 00:28:39.027 SQ Associations: Not Supported 00:28:39.027 UUID List: Not Supported 00:28:39.027 Multi-Domain Subsystem: Not Supported 00:28:39.027 Fixed Capacity Management: Not Supported 00:28:39.027 Variable Capacity Management: Not Supported 00:28:39.027 Delete Endurance Group: Not Supported 00:28:39.027 Delete NVM Set: Not Supported 00:28:39.027 Extended LBA Formats Supported: Not Supported 00:28:39.027 Flexible Data Placement Supported: Not Supported 00:28:39.027 00:28:39.027 Controller Memory Buffer Support 00:28:39.027 ================================ 00:28:39.027 Supported: No 00:28:39.027 00:28:39.027 Persistent Memory Region Support 00:28:39.027 ================================ 00:28:39.027 Supported: No 00:28:39.027 00:28:39.027 Admin Command Set Attributes 00:28:39.027 ============================ 00:28:39.027 Security Send/Receive: Not Supported 00:28:39.027 Format NVM: Not Supported 00:28:39.027 Firmware Activate/Download: Not Supported 00:28:39.027 Namespace Management: Not Supported 00:28:39.027 Device Self-Test: Not Supported 00:28:39.027 Directives: Not Supported 00:28:39.027 NVMe-MI: Not Supported 00:28:39.027 Virtualization Management: Not Supported 00:28:39.027 Doorbell Buffer Config: Not Supported 00:28:39.027 Get LBA Status Capability: Not Supported 00:28:39.027 Command & Feature Lockdown Capability: Not Supported 00:28:39.027 Abort Command Limit: 1 00:28:39.027 
Async Event Request Limit: 4 00:28:39.027 Number of Firmware Slots: N/A 00:28:39.027 Firmware Slot 1 Read-Only: N/A 00:28:39.027 Firmware Activation Without Reset: N/A 00:28:39.027 Multiple Update Detection Support: N/A 00:28:39.027 Firmware Update Granularity: No Information Provided 00:28:39.027 Per-Namespace SMART Log: No 00:28:39.027 Asymmetric Namespace Access Log Page: Not Supported 00:28:39.027 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:39.027 Command Effects Log Page: Not Supported 00:28:39.027 Get Log Page Extended Data: Supported 00:28:39.027 Telemetry Log Pages: Not Supported 00:28:39.027 Persistent Event Log Pages: Not Supported 00:28:39.027 Supported Log Pages Log Page: May Support 00:28:39.027 Commands Supported & Effects Log Page: Not Supported 00:28:39.027 Feature Identifiers & Effects Log Page:May Support 00:28:39.027 NVMe-MI Commands & Effects Log Page: May Support 00:28:39.027 Data Area 4 for Telemetry Log: Not Supported 00:28:39.027 Error Log Page Entries Supported: 128 00:28:39.027 Keep Alive: Not Supported 00:28:39.027 00:28:39.028 NVM Command Set Attributes 00:28:39.028 ========================== 00:28:39.028 Submission Queue Entry Size 00:28:39.028 Max: 1 00:28:39.028 Min: 1 00:28:39.028 Completion Queue Entry Size 00:28:39.028 Max: 1 00:28:39.028 Min: 1 00:28:39.028 Number of Namespaces: 0 00:28:39.028 Compare Command: Not Supported 00:28:39.028 Write Uncorrectable Command: Not Supported 00:28:39.028 Dataset Management Command: Not Supported 00:28:39.028 Write Zeroes Command: Not Supported 00:28:39.028 Set Features Save Field: Not Supported 00:28:39.028 Reservations: Not Supported 00:28:39.028 Timestamp: Not Supported 00:28:39.028 Copy: Not Supported 00:28:39.028 Volatile Write Cache: Not Present 00:28:39.028 Atomic Write Unit (Normal): 1 00:28:39.028 Atomic Write Unit (PFail): 1 00:28:39.028 Atomic Compare & Write Unit: 1 00:28:39.028 Fused Compare & Write: Supported 00:28:39.028 Scatter-Gather List 00:28:39.028 SGL Command Set: Supported 00:28:39.028 SGL Keyed: Supported 00:28:39.028 SGL Bit Bucket Descriptor: Not Supported 00:28:39.028 SGL Metadata Pointer: Not Supported 00:28:39.028 Oversized SGL: Not Supported 00:28:39.028 SGL Metadata Address: Not Supported 00:28:39.028 SGL Offset: Supported 00:28:39.028 Transport SGL Data Block: Not Supported 00:28:39.028 Replay Protected Memory Block: Not Supported 00:28:39.028 00:28:39.028 Firmware Slot Information 00:28:39.028 ========================= 00:28:39.028 Active slot: 0 00:28:39.028 00:28:39.028 00:28:39.028 Error Log 00:28:39.028 ========= 00:28:39.028 00:28:39.028 Active Namespaces 00:28:39.028 ================= 00:28:39.028 Discovery Log Page 00:28:39.028 ================== 00:28:39.028 Generation Counter: 2 00:28:39.028 Number of Records: 2 00:28:39.028 Record Format: 0 00:28:39.028 00:28:39.028 Discovery Log Entry 0 00:28:39.028 ---------------------- 00:28:39.028 Transport Type: 3 (TCP) 00:28:39.028 Address Family: 1 (IPv4) 00:28:39.028 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:39.028 Entry Flags: 00:28:39.028 Duplicate Returned Information: 1 00:28:39.028 Explicit Persistent Connection Support for Discovery: 1 00:28:39.028 Transport Requirements: 00:28:39.028 Secure Channel: Not Required 00:28:39.028 Port ID: 0 (0x0000) 00:28:39.028 Controller ID: 65535 (0xffff) 00:28:39.028 Admin Max SQ Size: 128 00:28:39.028 Transport Service Identifier: 4420 00:28:39.028 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:39.028 Transport Address: 10.0.0.2 00:28:39.028 
00:28:39.028 Discovery Log Entry 1
00:28:39.028 ----------------------
00:28:39.028 Transport Type: 3 (TCP)
00:28:39.028 Address Family: 1 (IPv4)
00:28:39.028 Subsystem Type: 2 (NVM Subsystem)
00:28:39.028 Entry Flags:
00:28:39.028 Duplicate Returned Information: 0
00:28:39.028 Explicit Persistent Connection Support for Discovery: 0
00:28:39.028 Transport Requirements:
00:28:39.028 Secure Channel: Not Required
00:28:39.028 Port ID: 0 (0x0000)
00:28:39.028 Controller ID: 65535 (0xffff)
00:28:39.028 Admin Max SQ Size: 128
00:28:39.028 Transport Service Identifier: 4420
00:28:39.028 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:39.028 Transport Address: 10.0.0.2 [2024-04-24 10:23:52.263374] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:39.028 [2024-04-24 10:23:52.263386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.028 [2024-04-24 10:23:52.263392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.028 [2024-04-24 10:23:52.263397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.028 [2024-04-24 10:23:52.263404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.028 [2024-04-24 10:23:52.263414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.028 [2024-04-24 10:23:52.263428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.028 [2024-04-24 10:23:52.263441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.028 [2024-04-24 10:23:52.263529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.028 [2024-04-24 10:23:52.263536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.028 [2024-04-24 10:23:52.263539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.028 [2024-04-24 10:23:52.263548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263554] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.028 [2024-04-24 10:23:52.263560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.028 [2024-04-24 10:23:52.263573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.028 [2024-04-24 10:23:52.263680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.028 [2024-04-24 10:23:52.263685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.028 [2024-04-24 10:23:52.263688]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.028 [2024-04-24 10:23:52.263696] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:39.028 [2024-04-24 10:23:52.263700] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:39.028 [2024-04-24 10:23:52.263708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263712] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263715] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.028 [2024-04-24 10:23:52.263720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.028 [2024-04-24 10:23:52.263730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.028 [2024-04-24 10:23:52.263812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.028 [2024-04-24 10:23:52.263817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.028 [2024-04-24 10:23:52.263820] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.028 [2024-04-24 10:23:52.263832] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263836] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263839] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.028 [2024-04-24 10:23:52.263844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.028 [2024-04-24 10:23:52.263853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.028 [2024-04-24 10:23:52.263983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.028 [2024-04-24 10:23:52.263989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.028 [2024-04-24 10:23:52.263992] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.263995] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.028 [2024-04-24 10:23:52.264004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264007] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.028 [2024-04-24 10:23:52.264016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.028 [2024-04-24 10:23:52.264025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.028 [2024-04-24 10:23:52.264134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.028 [2024-04-24 
10:23:52.264140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.028 [2024-04-24 10:23:52.264143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.028 [2024-04-24 10:23:52.264155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.028 [2024-04-24 10:23:52.264167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.028 [2024-04-24 10:23:52.264177] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.028 [2024-04-24 10:23:52.264285] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.028 [2024-04-24 10:23:52.264290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.028 [2024-04-24 10:23:52.264293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.028 [2024-04-24 10:23:52.264305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.028 [2024-04-24 10:23:52.264311] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.264317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.264326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.264412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.264418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.264421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.264432] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.264445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.264454] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.264586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.264594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.264597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:28:39.029 [2024-04-24 10:23:52.264600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.264608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.264621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.264629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.264738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.264744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.264747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.264758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.264771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.264780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.264888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.264893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.264896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264899] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.264908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.264915] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.264920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.264929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.265015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.265020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.265023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.265034] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.265046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.265055] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.265193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.265199] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.265204] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265207] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.265216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265222] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.265228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.265238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.265341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.265347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.265350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.265361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.265373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.265382] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.265494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.265500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.265503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.265515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 
10:23:52.265521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.265527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.265536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.265620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.265625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.265628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.029 [2024-04-24 10:23:52.265639] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.029 [2024-04-24 10:23:52.265646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.029 [2024-04-24 10:23:52.265651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.029 [2024-04-24 10:23:52.265660] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.029 [2024-04-24 10:23:52.265796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.029 [2024-04-24 10:23:52.265801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.029 [2024-04-24 10:23:52.265804] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.265812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.030 [2024-04-24 10:23:52.265821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.265824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.265827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.030 [2024-04-24 10:23:52.265833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.030 [2024-04-24 10:23:52.265842] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.030 [2024-04-24 10:23:52.265947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.030 [2024-04-24 10:23:52.265953] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.030 [2024-04-24 10:23:52.265956] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.265959] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.030 [2024-04-24 10:23:52.265968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.265971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.265974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.030 [2024-04-24 10:23:52.265980] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.030 [2024-04-24 10:23:52.265989] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.030 [2024-04-24 10:23:52.270078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.030 [2024-04-24 10:23:52.270087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.030 [2024-04-24 10:23:52.270090] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.270093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.030 [2024-04-24 10:23:52.270103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.270107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.270110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237a9e0) 00:28:39.030 [2024-04-24 10:23:52.270116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.030 [2024-04-24 10:23:52.270128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23e2b50, cid 3, qid 0 00:28:39.030 [2024-04-24 10:23:52.270300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.030 [2024-04-24 10:23:52.270306] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.030 [2024-04-24 10:23:52.270309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.030 [2024-04-24 10:23:52.270312] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23e2b50) on tqpair=0x237a9e0 00:28:39.030 [2024-04-24 10:23:52.270320] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:39.030 00:28:39.030 10:23:52 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:39.290 [2024-04-24 10:23:52.304966] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
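The spdk_nvme_identify invocation above is the whole client side of this step in one command: it parses the -r transport string, connects over TCP to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 (the address the discovery log entry advertised), prints the identify data, and detaches; -L all simply enables every SPDK log flag, which is why each PDU and state transition is traced here. A minimal sketch of the same flow against SPDK's public NVMe API, assuming only the standard spdk/env.h and spdk/nvme.h headers (the app name and printed fields are illustrative, not taken from this test):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (hugepages, memory, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport string the test passes via -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: internally runs the controller init state
	 * machine whose transitions the *DEBUG* records in this log trace. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* A couple of the fields the identify report above prints. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, MDTS exponent %u\n", cdata->cntlid, cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}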
00:28:39.290 [2024-04-24 10:23:52.305012] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435049 ] 00:28:39.290 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.290 [2024-04-24 10:23:52.333299] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:39.290 [2024-04-24 10:23:52.333341] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:39.290 [2024-04-24 10:23:52.333345] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:39.290 [2024-04-24 10:23:52.333356] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:39.291 [2024-04-24 10:23:52.333362] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:39.291 [2024-04-24 10:23:52.333729] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:39.291 [2024-04-24 10:23:52.333751] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c3c9e0 0 00:28:39.291 [2024-04-24 10:23:52.348080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:39.291 [2024-04-24 10:23:52.348095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:39.291 [2024-04-24 10:23:52.348099] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:39.291 [2024-04-24 10:23:52.348102] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:39.291 [2024-04-24 10:23:52.348132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.348137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.348141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.348151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:39.291 [2024-04-24 10:23:52.348166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.355080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.355089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.355092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.355107] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:39.291 [2024-04-24 10:23:52.355113] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:39.291 [2024-04-24 10:23:52.355118] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:39.291 [2024-04-24 10:23:52.355130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 
10:23:52.355137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.355144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.355156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.355337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.355344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.355347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.355358] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:39.291 [2024-04-24 10:23:52.355366] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:39.291 [2024-04-24 10:23:52.355375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355381] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.355388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.355399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.355483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.355489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.355492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355496] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.355501] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:39.291 [2024-04-24 10:23:52.355508] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:39.291 [2024-04-24 10:23:52.355514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.355526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.355536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.355619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.355625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
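The "setting state to ..." records above come from the controller initialization state machine in nvme_ctrlr.c: connect adminq, read vs, read cap, check en, disable, re-enable with CC.EN = 1, wait for CSTS.RDY = 1, then the identify phase that follows below. spdk_nvme_connect() hides the loop that drives those transitions; a caller that must not block can drive the same machine one poll at a time instead. A rough sketch, assuming SPDK's async connect path (the helper name connect_polled is hypothetical):

#include <errno.h>
#include <stddef.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr; /* filled in once init reaches "ready" */

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	(void)cb_ctx; (void)trid; (void)opts;
	g_ctrlr = ctrlr;
}

/* Hypothetical helper: each spdk_nvme_probe_poll_async() call advances the
 * init state machine, producing exactly the kind of "setting state to ..."
 * transitions logged above. */
static struct spdk_nvme_ctrlr *
connect_polled(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_probe_ctx *probe_ctx;

	probe_ctx = spdk_nvme_connect_async(trid, NULL, attach_cb);
	if (probe_ctx == NULL) {
		return NULL;
	}
	/* Returns -EAGAIN while init is still in progress, 0 once done. */
	while (spdk_nvme_probe_poll_async(probe_ctx) == -EAGAIN) {
		/* other application work could run between polls */
	}
	return g_ctrlr;
}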
00:28:39.291 [2024-04-24 10:23:52.355628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.355636] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:39.291 [2024-04-24 10:23:52.355644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.355656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.355665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.355748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.355754] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.355757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.355765] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:39.291 [2024-04-24 10:23:52.355769] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:39.291 [2024-04-24 10:23:52.355776] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:39.291 [2024-04-24 10:23:52.355881] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:39.291 [2024-04-24 10:23:52.355887] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:39.291 [2024-04-24 10:23:52.355893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.355900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.355906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.355916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.355998] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.356004] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.356007] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on 
tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.356015] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:39.291 [2024-04-24 10:23:52.356023] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.356045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.356133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.356139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.356142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.356150] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:39.291 [2024-04-24 10:23:52.356154] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356162] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:39.291 [2024-04-24 10:23:52.356169] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356183] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.356199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.356317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.291 [2024-04-24 10:23:52.356323] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.291 [2024-04-24 10:23:52.356326] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356329] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=4096, cccid=0 00:28:39.291 [2024-04-24 10:23:52.356336] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4730) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=4096 00:28:39.291 [2024-04-24 10:23:52.356343] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356346] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.356385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.356388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.356398] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:39.291 [2024-04-24 10:23:52.356405] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:39.291 [2024-04-24 10:23:52.356409] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:39.291 [2024-04-24 10:23:52.356412] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:39.291 [2024-04-24 10:23:52.356416] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:39.291 [2024-04-24 10:23:52.356420] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356429] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356441] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:39.291 [2024-04-24 10:23:52.356459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.356548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.356554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.356557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4730) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.356567] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.291 [2024-04-24 10:23:52.356584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356590] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.291 [2024-04-24 10:23:52.356600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356606] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.291 [2024-04-24 10:23:52.356618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.291 [2024-04-24 10:23:52.356634] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356643] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.356672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4730, cid 0, qid 0 00:28:39.291 [2024-04-24 10:23:52.356677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4890, cid 1, qid 0 00:28:39.291 [2024-04-24 10:23:52.356681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca49f0, cid 2, qid 0 00:28:39.291 [2024-04-24 10:23:52.356685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.291 [2024-04-24 10:23:52.356689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.291 [2024-04-24 10:23:52.356811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.356817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.356820] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.356828] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:39.291 
[2024-04-24 10:23:52.356832] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356840] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356845] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.356850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.356863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:39.291 [2024-04-24 10:23:52.356872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.291 [2024-04-24 10:23:52.356956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.291 [2024-04-24 10:23:52.356961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.291 [2024-04-24 10:23:52.356964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.356968] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.291 [2024-04-24 10:23:52.357013] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.357023] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:39.291 [2024-04-24 10:23:52.357029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.357033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.291 [2024-04-24 10:23:52.357036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.291 [2024-04-24 10:23:52.357041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.291 [2024-04-24 10:23:52.357051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.292 [2024-04-24 10:23:52.357148] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.357154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.357157] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357161] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=4096, cccid=4 00:28:39.292 [2024-04-24 10:23:52.357165] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4cb0) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=4096 00:28:39.292 [2024-04-24 10:23:52.357202] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357207] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357263] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.357269] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.357272] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357275] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.357284] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:39.292 [2024-04-24 10:23:52.357294] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357302] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.357321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.357332] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.292 [2024-04-24 10:23:52.357458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.357464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.357467] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357470] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=4096, cccid=4 00:28:39.292 [2024-04-24 10:23:52.357474] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4cb0) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=4096 00:28:39.292 [2024-04-24 10:23:52.357481] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357484] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.357525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.357528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357531] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.357543] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357552] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 
10:23:52.357562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357565] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.357571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.357583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.292 [2024-04-24 10:23:52.357682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.357688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.357691] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357694] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=4096, cccid=4 00:28:39.292 [2024-04-24 10:23:52.357698] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4cb0) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=4096 00:28:39.292 [2024-04-24 10:23:52.357704] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357707] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.357750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.357753] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357757] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.357763] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357770] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357778] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357783] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357787] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357791] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:39.292 [2024-04-24 10:23:52.357795] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:39.292 [2024-04-24 10:23:52.357800] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:39.292 [2024-04-24 10:23:52.357811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357818] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.357825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.357831] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357837] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.357842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.292 [2024-04-24 10:23:52.357855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.292 [2024-04-24 10:23:52.357860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4e10, cid 5, qid 0 00:28:39.292 [2024-04-24 10:23:52.357962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.357968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.357971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.357981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.357986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.357989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.357992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4e10) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4e10, cid 5, qid 0 00:28:39.292 [2024-04-24 10:23:52.358120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4e10) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358153] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4e10, cid 5, qid 0 00:28:39.292 [2024-04-24 10:23:52.358249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358258] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4e10) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4e10, cid 5, qid 0 00:28:39.292 [2024-04-24 10:23:52.358379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4e10) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358406] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358424] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358427] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358462] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c3c9e0) 00:28:39.292 [2024-04-24 10:23:52.358467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.292 [2024-04-24 10:23:52.358477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4e10, cid 5, qid 0 00:28:39.292 [2024-04-24 10:23:52.358481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4cb0, cid 4, qid 0 00:28:39.292 [2024-04-24 10:23:52.358486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4f70, cid 6, qid 0 00:28:39.292 [2024-04-24 10:23:52.358490] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca50d0, cid 7, qid 0 00:28:39.292 [2024-04-24 10:23:52.358644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.358650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.358653] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358656] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=8192, cccid=5 00:28:39.292 [2024-04-24 10:23:52.358660] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4e10) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=8192 00:28:39.292 [2024-04-24 10:23:52.358726] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358730] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.358745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.358749] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358752] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=512, cccid=4 00:28:39.292 [2024-04-24 10:23:52.358755] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4cb0) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=512 00:28:39.292 [2024-04-24 10:23:52.358762] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358765] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.358774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.358777] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358780] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=512, cccid=6 00:28:39.292 [2024-04-24 10:23:52.358784] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca4f70) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=512 00:28:39.292 [2024-04-24 10:23:52.358790] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358793] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:39.292 [2024-04-24 10:23:52.358803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:39.292 [2024-04-24 10:23:52.358806] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358809] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3c9e0): datao=0, datal=4096, cccid=7 00:28:39.292 [2024-04-24 10:23:52.358813] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca50d0) on tqpair(0x1c3c9e0): expected_datao=0, payload_size=4096 00:28:39.292 [2024-04-24 10:23:52.358819] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358822] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4e10) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4cb0) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4f70) on tqpair=0x1c3c9e0 00:28:39.292 [2024-04-24 10:23:52.358913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.292 [2024-04-24 10:23:52.358918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.292 [2024-04-24 10:23:52.358921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.292 [2024-04-24 10:23:52.358925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca50d0) on tqpair=0x1c3c9e0 00:28:39.292 ===================================================== 00:28:39.292 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.292 ===================================================== 00:28:39.292 Controller Capabilities/Features 00:28:39.292 ================================ 00:28:39.292 Vendor ID: 8086 00:28:39.292 Subsystem Vendor ID: 8086 00:28:39.292 Serial Number: SPDK00000000000001 00:28:39.292 Model Number: SPDK bdev Controller 00:28:39.292 Firmware Version: 24.01.1 00:28:39.292 Recommended Arb Burst: 6 00:28:39.292 IEEE OUI Identifier: e4 d2 5c 00:28:39.292 Multi-path I/O 00:28:39.292 May have multiple subsystem 
ports: Yes 00:28:39.292 May have multiple controllers: Yes 00:28:39.292 Associated with SR-IOV VF: No 00:28:39.292 Max Data Transfer Size: 131072 00:28:39.292 Max Number of Namespaces: 32 00:28:39.292 Max Number of I/O Queues: 127 00:28:39.292 NVMe Specification Version (VS): 1.3 00:28:39.292 NVMe Specification Version (Identify): 1.3 00:28:39.292 Maximum Queue Entries: 128 00:28:39.292 Contiguous Queues Required: Yes 00:28:39.292 Arbitration Mechanisms Supported 00:28:39.292 Weighted Round Robin: Not Supported 00:28:39.292 Vendor Specific: Not Supported 00:28:39.292 Reset Timeout: 15000 ms 00:28:39.292 Doorbell Stride: 4 bytes 00:28:39.292 NVM Subsystem Reset: Not Supported 00:28:39.292 Command Sets Supported 00:28:39.292 NVM Command Set: Supported 00:28:39.292 Boot Partition: Not Supported 00:28:39.292 Memory Page Size Minimum: 4096 bytes 00:28:39.292 Memory Page Size Maximum: 4096 bytes 00:28:39.292 Persistent Memory Region: Not Supported 00:28:39.292 Optional Asynchronous Events Supported 00:28:39.292 Namespace Attribute Notices: Supported 00:28:39.292 Firmware Activation Notices: Not Supported 00:28:39.292 ANA Change Notices: Not Supported 00:28:39.292 PLE Aggregate Log Change Notices: Not Supported 00:28:39.292 LBA Status Info Alert Notices: Not Supported 00:28:39.292 EGE Aggregate Log Change Notices: Not Supported 00:28:39.293 Normal NVM Subsystem Shutdown event: Not Supported 00:28:39.293 Zone Descriptor Change Notices: Not Supported 00:28:39.293 Discovery Log Change Notices: Not Supported 00:28:39.293 Controller Attributes 00:28:39.293 128-bit Host Identifier: Supported 00:28:39.293 Non-Operational Permissive Mode: Not Supported 00:28:39.293 NVM Sets: Not Supported 00:28:39.293 Read Recovery Levels: Not Supported 00:28:39.293 Endurance Groups: Not Supported 00:28:39.293 Predictable Latency Mode: Not Supported 00:28:39.293 Traffic Based Keep ALive: Not Supported 00:28:39.293 Namespace Granularity: Not Supported 00:28:39.293 SQ Associations: Not Supported 00:28:39.293 UUID List: Not Supported 00:28:39.293 Multi-Domain Subsystem: Not Supported 00:28:39.293 Fixed Capacity Management: Not Supported 00:28:39.293 Variable Capacity Management: Not Supported 00:28:39.293 Delete Endurance Group: Not Supported 00:28:39.293 Delete NVM Set: Not Supported 00:28:39.293 Extended LBA Formats Supported: Not Supported 00:28:39.293 Flexible Data Placement Supported: Not Supported 00:28:39.293 00:28:39.293 Controller Memory Buffer Support 00:28:39.293 ================================ 00:28:39.293 Supported: No 00:28:39.293 00:28:39.293 Persistent Memory Region Support 00:28:39.293 ================================ 00:28:39.293 Supported: No 00:28:39.293 00:28:39.293 Admin Command Set Attributes 00:28:39.293 ============================ 00:28:39.293 Security Send/Receive: Not Supported 00:28:39.293 Format NVM: Not Supported 00:28:39.293 Firmware Activate/Download: Not Supported 00:28:39.293 Namespace Management: Not Supported 00:28:39.293 Device Self-Test: Not Supported 00:28:39.293 Directives: Not Supported 00:28:39.293 NVMe-MI: Not Supported 00:28:39.293 Virtualization Management: Not Supported 00:28:39.293 Doorbell Buffer Config: Not Supported 00:28:39.293 Get LBA Status Capability: Not Supported 00:28:39.293 Command & Feature Lockdown Capability: Not Supported 00:28:39.293 Abort Command Limit: 4 00:28:39.293 Async Event Request Limit: 4 00:28:39.293 Number of Firmware Slots: N/A 00:28:39.293 Firmware Slot 1 Read-Only: N/A 00:28:39.293 Firmware Activation Without Reset: N/A 00:28:39.293 Multiple 
Update Detection Support: N/A 00:28:39.293 Firmware Update Granularity: No Information Provided 00:28:39.293 Per-Namespace SMART Log: No 00:28:39.293 Asymmetric Namespace Access Log Page: Not Supported 00:28:39.293 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:39.293 Command Effects Log Page: Supported 00:28:39.293 Get Log Page Extended Data: Supported 00:28:39.293 Telemetry Log Pages: Not Supported 00:28:39.293 Persistent Event Log Pages: Not Supported 00:28:39.293 Supported Log Pages Log Page: May Support 00:28:39.293 Commands Supported & Effects Log Page: Not Supported 00:28:39.293 Feature Identifiers & Effects Log Page:May Support 00:28:39.293 NVMe-MI Commands & Effects Log Page: May Support 00:28:39.293 Data Area 4 for Telemetry Log: Not Supported 00:28:39.293 Error Log Page Entries Supported: 128 00:28:39.293 Keep Alive: Supported 00:28:39.293 Keep Alive Granularity: 10000 ms 00:28:39.293 00:28:39.293 NVM Command Set Attributes 00:28:39.293 ========================== 00:28:39.293 Submission Queue Entry Size 00:28:39.293 Max: 64 00:28:39.293 Min: 64 00:28:39.293 Completion Queue Entry Size 00:28:39.293 Max: 16 00:28:39.293 Min: 16 00:28:39.293 Number of Namespaces: 32 00:28:39.293 Compare Command: Supported 00:28:39.293 Write Uncorrectable Command: Not Supported 00:28:39.293 Dataset Management Command: Supported 00:28:39.293 Write Zeroes Command: Supported 00:28:39.293 Set Features Save Field: Not Supported 00:28:39.293 Reservations: Supported 00:28:39.293 Timestamp: Not Supported 00:28:39.293 Copy: Supported 00:28:39.293 Volatile Write Cache: Present 00:28:39.293 Atomic Write Unit (Normal): 1 00:28:39.293 Atomic Write Unit (PFail): 1 00:28:39.293 Atomic Compare & Write Unit: 1 00:28:39.293 Fused Compare & Write: Supported 00:28:39.293 Scatter-Gather List 00:28:39.293 SGL Command Set: Supported 00:28:39.293 SGL Keyed: Supported 00:28:39.293 SGL Bit Bucket Descriptor: Not Supported 00:28:39.293 SGL Metadata Pointer: Not Supported 00:28:39.293 Oversized SGL: Not Supported 00:28:39.293 SGL Metadata Address: Not Supported 00:28:39.293 SGL Offset: Supported 00:28:39.293 Transport SGL Data Block: Not Supported 00:28:39.293 Replay Protected Memory Block: Not Supported 00:28:39.293 00:28:39.293 Firmware Slot Information 00:28:39.293 ========================= 00:28:39.293 Active slot: 1 00:28:39.293 Slot 1 Firmware Revision: 24.01.1 00:28:39.293 00:28:39.293 00:28:39.293 Commands Supported and Effects 00:28:39.293 ============================== 00:28:39.293 Admin Commands 00:28:39.293 -------------- 00:28:39.293 Get Log Page (02h): Supported 00:28:39.293 Identify (06h): Supported 00:28:39.293 Abort (08h): Supported 00:28:39.293 Set Features (09h): Supported 00:28:39.293 Get Features (0Ah): Supported 00:28:39.293 Asynchronous Event Request (0Ch): Supported 00:28:39.293 Keep Alive (18h): Supported 00:28:39.293 I/O Commands 00:28:39.293 ------------ 00:28:39.293 Flush (00h): Supported LBA-Change 00:28:39.293 Write (01h): Supported LBA-Change 00:28:39.293 Read (02h): Supported 00:28:39.293 Compare (05h): Supported 00:28:39.293 Write Zeroes (08h): Supported LBA-Change 00:28:39.293 Dataset Management (09h): Supported LBA-Change 00:28:39.293 Copy (19h): Supported LBA-Change 00:28:39.293 Unknown (79h): Supported LBA-Change 00:28:39.293 Unknown (7Ah): Supported 00:28:39.293 00:28:39.293 Error Log 00:28:39.293 ========= 00:28:39.293 00:28:39.293 Arbitration 00:28:39.293 =========== 00:28:39.293 Arbitration Burst: 1 00:28:39.293 00:28:39.293 Power Management 00:28:39.293 ================ 00:28:39.293 
Number of Power States: 1 00:28:39.293 Current Power State: Power State #0 00:28:39.293 Power State #0: 00:28:39.293 Max Power: 0.00 W 00:28:39.293 Non-Operational State: Operational 00:28:39.293 Entry Latency: Not Reported 00:28:39.293 Exit Latency: Not Reported 00:28:39.293 Relative Read Throughput: 0 00:28:39.293 Relative Read Latency: 0 00:28:39.293 Relative Write Throughput: 0 00:28:39.293 Relative Write Latency: 0 00:28:39.293 Idle Power: Not Reported 00:28:39.293 Active Power: Not Reported 00:28:39.293 Non-Operational Permissive Mode: Not Supported 00:28:39.293 00:28:39.293 Health Information 00:28:39.293 ================== 00:28:39.293 Critical Warnings: 00:28:39.293 Available Spare Space: OK 00:28:39.293 Temperature: OK 00:28:39.293 Device Reliability: OK 00:28:39.293 Read Only: No 00:28:39.293 Volatile Memory Backup: OK 00:28:39.293 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:39.293 Temperature Threshold: [2024-04-24 10:23:52.359011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.359015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.359020] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.359026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 10:23:52.359038] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca50d0, cid 7, qid 0 00:28:39.293 [2024-04-24 10:23:52.363079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.293 [2024-04-24 10:23:52.363086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.293 [2024-04-24 10:23:52.363089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca50d0) on tqpair=0x1c3c9e0 00:28:39.293 [2024-04-24 10:23:52.363120] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:39.293 [2024-04-24 10:23:52.363131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.293 [2024-04-24 10:23:52.363137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.293 [2024-04-24 10:23:52.363142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.293 [2024-04-24 10:23:52.363147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.293 [2024-04-24 10:23:52.363155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.363168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 10:23:52.363180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 
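The FABRIC PROPERTY SET/GET entries that follow are the host shutting the controller down during destruct: one property write to request shutdown, then repeated status polls until the controller reports completion ("shutdown complete in 5 milliseconds" further below). A couple of grep sketches for skimming these traces in a saved copy of this console output — the file name build.log is an assumption, not something the harness produces:

  grep -c 'FABRIC PROPERTY GET' build.log                      # count the shutdown-status polls
  grep -o 'GET FEATURES [A-Z_ ]*' build.log | sort | uniq -c   # which features the identify test queried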
00:28:39.293 [2024-04-24 10:23:52.363360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.293 [2024-04-24 10:23:52.363366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.293 [2024-04-24 10:23:52.363369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363373] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.293 [2024-04-24 10:23:52.363379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363383] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.363392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 10:23:52.363405] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.293 [2024-04-24 10:23:52.363517] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.293 [2024-04-24 10:23:52.363523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.293 [2024-04-24 10:23:52.363526] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.293 [2024-04-24 10:23:52.363534] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:39.293 [2024-04-24 10:23:52.363538] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:39.293 [2024-04-24 10:23:52.363547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.363562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 10:23:52.363571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.293 [2024-04-24 10:23:52.363657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.293 [2024-04-24 10:23:52.363662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.293 [2024-04-24 10:23:52.363666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.293 [2024-04-24 10:23:52.363678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.363691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 
10:23:52.363700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.293 [2024-04-24 10:23:52.363784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.293 [2024-04-24 10:23:52.363790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.293 [2024-04-24 10:23:52.363792] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363796] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.293 [2024-04-24 10:23:52.363804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363811] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.363817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 10:23:52.363826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.293 [2024-04-24 10:23:52.363910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.293 [2024-04-24 10:23:52.363915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.293 [2024-04-24 10:23:52.363918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.293 [2024-04-24 10:23:52.363931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.293 [2024-04-24 10:23:52.363937] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.293 [2024-04-24 10:23:52.363943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.293 [2024-04-24 10:23:52.363952] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364046] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364049] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364065] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364185] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364192] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364569] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364585] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364605] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364702] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364706] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364842] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.364945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.364951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.364954] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on 
tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.364966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.364973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.364978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.364987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.365079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.365085] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.365089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.365092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.365102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.365106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.365109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.365115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.365127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.369078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.369086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.369089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.369092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.369103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.369107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.369110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3c9e0) 00:28:39.294 [2024-04-24 10:23:52.369116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.294 [2024-04-24 10:23:52.369127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca4b50, cid 3, qid 0 00:28:39.294 [2024-04-24 10:23:52.369305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:39.294 [2024-04-24 10:23:52.369311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:39.294 [2024-04-24 10:23:52.369314] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:39.294 [2024-04-24 10:23:52.369317] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ca4b50) on tqpair=0x1c3c9e0 00:28:39.294 [2024-04-24 10:23:52.369324] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 
00:28:39.294 0 Kelvin (-273 Celsius) 00:28:39.294 Available Spare: 0% 00:28:39.294 Available Spare Threshold: 0% 00:28:39.294 Life Percentage Used: 0% 00:28:39.294 Data Units Read: 0 00:28:39.294 Data Units Written: 0 00:28:39.294 Host Read Commands: 0 00:28:39.294 Host Write Commands: 0 00:28:39.294 Controller Busy Time: 0 minutes 00:28:39.294 Power Cycles: 0 00:28:39.294 Power On Hours: 0 hours 00:28:39.294 Unsafe Shutdowns: 0 00:28:39.294 Unrecoverable Media Errors: 0 00:28:39.294 Lifetime Error Log Entries: 0 00:28:39.294 Warning Temperature Time: 0 minutes 00:28:39.294 Critical Temperature Time: 0 minutes 00:28:39.294 00:28:39.294 Number of Queues 00:28:39.294 ================ 00:28:39.294 Number of I/O Submission Queues: 127 00:28:39.294 Number of I/O Completion Queues: 127 00:28:39.294 00:28:39.294 Active Namespaces 00:28:39.294 ================= 00:28:39.294 Namespace ID:1 00:28:39.294 Error Recovery Timeout: Unlimited 00:28:39.294 Command Set Identifier: NVM (00h) 00:28:39.294 Deallocate: Supported 00:28:39.294 Deallocated/Unwritten Error: Not Supported 00:28:39.294 Deallocated Read Value: Unknown 00:28:39.294 Deallocate in Write Zeroes: Not Supported 00:28:39.294 Deallocated Guard Field: 0xFFFF 00:28:39.294 Flush: Supported 00:28:39.294 Reservation: Supported 00:28:39.294 Namespace Sharing Capabilities: Multiple Controllers 00:28:39.294 Size (in LBAs): 131072 (0GiB) 00:28:39.294 Capacity (in LBAs): 131072 (0GiB) 00:28:39.294 Utilization (in LBAs): 131072 (0GiB) 00:28:39.294 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:39.294 EUI64: ABCDEF0123456789 00:28:39.294 UUID: 249e5619-2f98-4bbf-86a4-b1a9a7877ba3 00:28:39.294 Thin Provisioning: Not Supported 00:28:39.294 Per-NS Atomic Units: Yes 00:28:39.294 Atomic Boundary Size (Normal): 0 00:28:39.294 Atomic Boundary Size (PFail): 0 00:28:39.294 Atomic Boundary Offset: 0 00:28:39.294 Maximum Single Source Range Length: 65535 00:28:39.294 Maximum Copy Length: 65535 00:28:39.294 Maximum Source Range Count: 1 00:28:39.294 NGUID/EUI64 Never Reused: No 00:28:39.294 Namespace Write Protected: No 00:28:39.294 Number of LBA Formats: 1 00:28:39.294 Current LBA Format: LBA Format #00 00:28:39.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:39.294 00:28:39.294 10:23:52 -- host/identify.sh@51 -- # sync 00:28:39.294 10:23:52 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.294 10:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.294 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:28:39.294 10:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.294 10:23:52 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:39.294 10:23:52 -- host/identify.sh@56 -- # nvmftestfini 00:28:39.294 10:23:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:39.294 10:23:52 -- nvmf/common.sh@116 -- # sync 00:28:39.294 10:23:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:39.294 10:23:52 -- nvmf/common.sh@119 -- # set +e 00:28:39.294 10:23:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:39.294 10:23:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:39.294 rmmod nvme_tcp 00:28:39.294 rmmod nvme_fabrics 00:28:39.294 rmmod nvme_keyring 00:28:39.294 10:23:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:39.294 10:23:52 -- nvmf/common.sh@123 -- # set -e 00:28:39.294 10:23:52 -- nvmf/common.sh@124 -- # return 0 00:28:39.294 10:23:52 -- nvmf/common.sh@477 -- # '[' -n 434794 ']' 00:28:39.294 10:23:52 -- nvmf/common.sh@478 -- # killprocess 434794 
00:28:39.294 10:23:52 -- common/autotest_common.sh@926 -- # '[' -z 434794 ']' 00:28:39.294 10:23:52 -- common/autotest_common.sh@930 -- # kill -0 434794 00:28:39.294 10:23:52 -- common/autotest_common.sh@931 -- # uname 00:28:39.294 10:23:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:39.294 10:23:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434794 00:28:39.294 10:23:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:39.294 10:23:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:39.294 10:23:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434794' 00:28:39.294 killing process with pid 434794 00:28:39.294 10:23:52 -- common/autotest_common.sh@945 -- # kill 434794 00:28:39.294 [2024-04-24 10:23:52.485899] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:39.294 10:23:52 -- common/autotest_common.sh@950 -- # wait 434794 00:28:39.552 10:23:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:39.552 10:23:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:39.552 10:23:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:39.552 10:23:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.552 10:23:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:39.552 10:23:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.552 10:23:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.552 10:23:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.086 10:23:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:42.086 00:28:42.086 real 0m9.124s 00:28:42.086 user 0m7.109s 00:28:42.086 sys 0m4.442s 00:28:42.086 10:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.086 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:28:42.086 ************************************ 00:28:42.086 END TEST nvmf_identify 00:28:42.086 ************************************ 00:28:42.086 10:23:54 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:42.086 10:23:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:42.086 10:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.087 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:28:42.087 ************************************ 00:28:42.087 START TEST nvmf_perf 00:28:42.087 ************************************ 00:28:42.087 10:23:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:42.087 * Looking for test storage... 
00:28:42.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.087 10:23:54 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.087 10:23:54 -- nvmf/common.sh@7 -- # uname -s 00:28:42.087 10:23:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.087 10:23:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.087 10:23:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.087 10:23:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.087 10:23:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.087 10:23:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.087 10:23:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.087 10:23:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.087 10:23:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.087 10:23:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.087 10:23:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:42.087 10:23:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:42.087 10:23:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.087 10:23:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.087 10:23:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.087 10:23:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.087 10:23:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.087 10:23:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.087 10:23:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.087 10:23:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.087 10:23:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.087 10:23:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.087 10:23:54 -- paths/export.sh@5 -- # export PATH 00:28:42.087 10:23:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.087 10:23:54 -- nvmf/common.sh@46 -- # : 0 00:28:42.087 10:23:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:42.087 10:23:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:42.087 10:23:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:42.087 10:23:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.087 10:23:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.087 10:23:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:42.087 10:23:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:42.087 10:23:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:42.087 10:23:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:42.087 10:23:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:42.087 10:23:54 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:42.087 10:23:54 -- host/perf.sh@17 -- # nvmftestinit 00:28:42.087 10:23:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:42.087 10:23:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.087 10:23:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:42.087 10:23:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:42.087 10:23:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:42.087 10:23:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.087 10:23:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.087 10:23:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.087 10:23:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:42.087 10:23:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:42.087 10:23:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:42.087 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:28:47.353 10:23:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:47.353 10:23:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:47.353 10:23:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:47.353 10:23:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:47.353 10:23:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:47.353 10:23:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:47.353 10:23:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:47.353 10:23:59 -- nvmf/common.sh@294 -- # net_devs=() 
00:28:47.353 10:23:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:47.353 10:23:59 -- nvmf/common.sh@295 -- # e810=() 00:28:47.353 10:23:59 -- nvmf/common.sh@295 -- # local -ga e810 00:28:47.353 10:23:59 -- nvmf/common.sh@296 -- # x722=() 00:28:47.353 10:23:59 -- nvmf/common.sh@296 -- # local -ga x722 00:28:47.353 10:23:59 -- nvmf/common.sh@297 -- # mlx=() 00:28:47.353 10:23:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:47.353 10:23:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.353 10:24:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:47.353 10:24:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:47.353 10:24:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:47.353 10:24:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:47.353 10:24:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:47.353 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:47.353 10:24:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:47.353 10:24:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:47.353 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:47.353 10:24:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:47.353 10:24:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:47.353 10:24:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.353 10:24:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:47.353 10:24:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:47.353 10:24:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:47.353 Found net devices under 0000:86:00.0: cvl_0_0 00:28:47.353 10:24:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.353 10:24:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:47.353 10:24:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.353 10:24:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:47.353 10:24:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.353 10:24:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:47.353 Found net devices under 0000:86:00.1: cvl_0_1 00:28:47.353 10:24:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.353 10:24:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:47.353 10:24:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:47.353 10:24:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:47.353 10:24:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.353 10:24:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.353 10:24:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.353 10:24:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:47.353 10:24:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.353 10:24:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.353 10:24:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:47.353 10:24:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.353 10:24:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.353 10:24:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:47.353 10:24:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:47.353 10:24:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.353 10:24:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.353 10:24:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.353 10:24:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.353 10:24:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:47.353 10:24:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.353 10:24:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.353 10:24:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.353 10:24:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:47.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:28:47.353 00:28:47.353 --- 10.0.0.2 ping statistics --- 00:28:47.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.353 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:47.353 10:24:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:28:47.353 00:28:47.353 --- 10.0.0.1 ping statistics --- 00:28:47.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.353 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:47.353 10:24:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.353 10:24:00 -- nvmf/common.sh@410 -- # return 0 00:28:47.353 10:24:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:47.353 10:24:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.353 10:24:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:47.353 10:24:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.353 10:24:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:47.353 10:24:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:47.353 10:24:00 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:47.353 10:24:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:47.353 10:24:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:47.353 10:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:47.353 10:24:00 -- nvmf/common.sh@469 -- # nvmfpid=438605 00:28:47.353 10:24:00 -- nvmf/common.sh@470 -- # waitforlisten 438605 00:28:47.353 10:24:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:47.353 10:24:00 -- common/autotest_common.sh@819 -- # '[' -z 438605 ']' 00:28:47.354 10:24:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.354 10:24:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:47.354 10:24:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.354 10:24:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:47.354 10:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:47.354 [2024-04-24 10:24:00.351628] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:47.354 [2024-04-24 10:24:00.351672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.354 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.354 [2024-04-24 10:24:00.410180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.354 [2024-04-24 10:24:00.489954] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:47.354 [2024-04-24 10:24:00.490061] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.354 [2024-04-24 10:24:00.490073] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.354 [2024-04-24 10:24:00.490081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
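The nvmfappstart above launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF, so one reactor comes up on each of cores 0 through 3, as the notices just below confirm, and waitforlisten blocks until the app answers on its RPC socket. A minimal bash sketch of that wait, assuming the default /var/tmp/spdk.sock and shortened paths; the retry budget and sleep interval are illustrative, not the harness's exact values:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll until the target is up; rpc_get_methods is a standard SPDK RPC that
# succeeds as soon as the app is listening on its UNIX-domain RPC socket
for _ in $(seq 1 100); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done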
00:28:47.354 [2024-04-24 10:24:00.490119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.354 [2024-04-24 10:24:00.490239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.354 [2024-04-24 10:24:00.490334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.354 [2024-04-24 10:24:00.490335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.918 10:24:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:47.919 10:24:01 -- common/autotest_common.sh@852 -- # return 0 00:28:47.919 10:24:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:47.919 10:24:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:47.919 10:24:01 -- common/autotest_common.sh@10 -- # set +x 00:28:48.176 10:24:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.176 10:24:01 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:48.176 10:24:01 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:51.456 10:24:04 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:51.456 10:24:04 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:51.456 10:24:04 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:28:51.456 10:24:04 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:51.456 10:24:04 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:51.456 10:24:04 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:28:51.456 10:24:04 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:51.456 10:24:04 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:51.456 10:24:04 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:51.712 [2024-04-24 10:24:04.764683] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.712 10:24:04 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.712 10:24:04 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:51.712 10:24:04 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.969 10:24:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:51.969 10:24:05 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:52.226 10:24:05 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.483 [2024-04-24 10:24:05.515626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.483 10:24:05 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:52.483 10:24:05 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:28:52.483 10:24:05 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:52.483 10:24:05 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
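At this point the perf script has finished provisioning the target: one TCP transport, one subsystem, the Malloc0 and Nvme0n1 bdevs attached as namespaces, and subsystem plus discovery listeners on 10.0.0.2:4420. Condensed from the rpc.py calls traced above (rpc.py abbreviates the full scripts/rpc.py path; -a allows any host NQN, -s sets the subsystem serial number):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Because the listener address sits on cvl_0_0 inside the target namespace while the initiator keeps 10.0.0.1 on cvl_0_1 in the default namespace, the spdk_nvme_perf runs that follow push traffic across the physical link between the two ports instead of short-circuiting through loopback.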
00:28:52.483 10:24:05 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:53.854 Initializing NVMe Controllers 00:28:53.854 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:28:53.854 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:28:53.854 Initialization complete. Launching workers. 00:28:53.854 ======================================================== 00:28:53.854 Latency(us) 00:28:53.854 Device Information : IOPS MiB/s Average min max 00:28:53.854 PCIE (0000:5e:00.0) NSID 1 from core 0: 99606.85 389.09 320.71 9.59 4911.81 00:28:53.854 ======================================================== 00:28:53.854 Total : 99606.85 389.09 320.71 9.59 4911.81 00:28:53.854 00:28:53.854 10:24:06 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.854 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.228 Initializing NVMe Controllers 00:28:55.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:55.228 Initialization complete. Launching workers. 00:28:55.228 ======================================================== 00:28:55.228 Latency(us) 00:28:55.228 Device Information : IOPS MiB/s Average min max 00:28:55.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.97 0.24 16523.75 197.77 45205.84 00:28:55.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 84.95 0.33 12240.07 7962.13 47885.51 00:28:55.228 ======================================================== 00:28:55.228 Total : 146.92 0.57 14046.80 197.77 47885.51 00:28:55.228 00:28:55.228 10:24:08 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.228 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.602 Initializing NVMe Controllers 00:28:56.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:56.602 Initialization complete. Launching workers. 
00:28:56.602 ======================================================== 00:28:56.602 Latency(us) 00:28:56.602 Device Information : IOPS MiB/s Average min max 00:28:56.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10290.99 40.20 3111.64 560.30 9333.49 00:28:56.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3966.00 15.49 8128.84 4339.02 18519.65 00:28:56.602 ======================================================== 00:28:56.602 Total : 14256.99 55.69 4507.32 560.30 18519.65 00:28:56.603 00:28:56.603 10:24:09 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:56.603 10:24:09 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:56.603 10:24:09 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.603 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.131 Initializing NVMe Controllers 00:28:59.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.131 Controller IO queue size 128, less than required. 00:28:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.131 Controller IO queue size 128, less than required. 00:28:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:59.131 Initialization complete. Launching workers. 00:28:59.131 ======================================================== 00:28:59.131 Latency(us) 00:28:59.131 Device Information : IOPS MiB/s Average min max 00:28:59.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1029.99 257.50 127635.14 58787.53 182866.39 00:28:59.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.49 153.62 220370.49 69001.63 318437.90 00:28:59.131 ======================================================== 00:28:59.131 Total : 1644.48 411.12 162287.54 58787.53 318437.90 00:28:59.131 00:28:59.131 10:24:11 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:59.131 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.131 No valid NVMe controllers or AIO or URING devices found 00:28:59.131 Initializing NVMe Controllers 00:28:59.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.131 Controller IO queue size 128, less than required. 00:28:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.131 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:59.131 Controller IO queue size 128, less than required. 00:28:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.131 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:28:59.131 WARNING: Some requested NVMe devices were skipped 00:28:59.131 10:24:12 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:59.131 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.660 Initializing NVMe Controllers 00:29:01.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.660 Controller IO queue size 128, less than required. 00:29:01.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.660 Controller IO queue size 128, less than required. 00:29:01.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:01.660 Initialization complete. Launching workers. 00:29:01.660 00:29:01.660 ==================== 00:29:01.660 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:01.660 TCP transport: 00:29:01.660 polls: 40532 00:29:01.660 idle_polls: 12711 00:29:01.660 sock_completions: 27821 00:29:01.660 nvme_completions: 3880 00:29:01.660 submitted_requests: 5932 00:29:01.660 queued_requests: 1 00:29:01.660 00:29:01.660 ==================== 00:29:01.660 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:01.660 TCP transport: 00:29:01.660 polls: 40818 00:29:01.660 idle_polls: 13085 00:29:01.660 sock_completions: 27733 00:29:01.660 nvme_completions: 3852 00:29:01.660 submitted_requests: 5944 00:29:01.660 queued_requests: 1 00:29:01.660 ======================================================== 00:29:01.660 Latency(us) 00:29:01.660 Device Information : IOPS MiB/s Average min max 00:29:01.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1033.50 258.37 128727.15 65838.81 176873.70 00:29:01.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1026.50 256.62 129024.64 51108.54 209778.88 00:29:01.660 ======================================================== 00:29:01.660 Total : 2059.99 515.00 128875.39 51108.54 209778.88 00:29:01.660 00:29:01.660 10:24:14 -- host/perf.sh@66 -- # sync 00:29:01.660 10:24:14 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:01.918 10:24:15 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:01.918 10:24:15 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:01.918 10:24:15 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:05.202 10:24:18 -- host/perf.sh@72 -- # ls_guid=8cdf535e-bd91-4b04-b888-b00b85aa94c3 00:29:05.202 10:24:18 -- host/perf.sh@73 -- # get_lvs_free_mb 8cdf535e-bd91-4b04-b888-b00b85aa94c3 00:29:05.202 10:24:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8cdf535e-bd91-4b04-b888-b00b85aa94c3 00:29:05.202 10:24:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:05.202 10:24:18 -- common/autotest_common.sh@1345 -- # local fc 00:29:05.202 10:24:18 -- common/autotest_common.sh@1346 -- # local cs 00:29:05.202 10:24:18 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:05.202 10:24:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:05.202 { 00:29:05.202 "uuid": "8cdf535e-bd91-4b04-b888-b00b85aa94c3", 00:29:05.202 "name": "lvs_0", 00:29:05.202 "base_bdev": "Nvme0n1", 00:29:05.202 "total_data_clusters": 238234, 00:29:05.202 "free_clusters": 238234, 00:29:05.202 "block_size": 512, 00:29:05.202 "cluster_size": 4194304 00:29:05.202 } 00:29:05.202 ]' 00:29:05.202 10:24:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8cdf535e-bd91-4b04-b888-b00b85aa94c3") .free_clusters' 00:29:05.202 10:24:18 -- common/autotest_common.sh@1348 -- # fc=238234 00:29:05.202 10:24:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8cdf535e-bd91-4b04-b888-b00b85aa94c3") .cluster_size' 00:29:05.461 10:24:18 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:05.461 10:24:18 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:29:05.461 10:24:18 -- common/autotest_common.sh@1353 -- # echo 952936 00:29:05.461 952936 00:29:05.461 10:24:18 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:05.461 10:24:18 -- host/perf.sh@78 -- # free_mb=20480 00:29:05.461 10:24:18 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8cdf535e-bd91-4b04-b888-b00b85aa94c3 lbd_0 20480 00:29:06.027 10:24:19 -- host/perf.sh@80 -- # lb_guid=0386abe7-483f-40d4-aa52-d3f880c3b033 00:29:06.027 10:24:19 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0386abe7-483f-40d4-aa52-d3f880c3b033 lvs_n_0 00:29:06.593 10:24:19 -- host/perf.sh@83 -- # ls_nested_guid=a90da575-7dd9-4244-9e24-a9b80f27a4e9 00:29:06.593 10:24:19 -- host/perf.sh@84 -- # get_lvs_free_mb a90da575-7dd9-4244-9e24-a9b80f27a4e9 00:29:06.593 10:24:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a90da575-7dd9-4244-9e24-a9b80f27a4e9 00:29:06.593 10:24:19 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:06.593 10:24:19 -- common/autotest_common.sh@1345 -- # local fc 00:29:06.593 10:24:19 -- common/autotest_common.sh@1346 -- # local cs 00:29:06.593 10:24:19 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:06.593 10:24:19 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:06.593 { 00:29:06.593 "uuid": "8cdf535e-bd91-4b04-b888-b00b85aa94c3", 00:29:06.593 "name": "lvs_0", 00:29:06.593 "base_bdev": "Nvme0n1", 00:29:06.593 "total_data_clusters": 238234, 00:29:06.593 "free_clusters": 233114, 00:29:06.593 "block_size": 512, 00:29:06.593 "cluster_size": 4194304 00:29:06.593 }, 00:29:06.593 { 00:29:06.594 "uuid": "a90da575-7dd9-4244-9e24-a9b80f27a4e9", 00:29:06.594 "name": "lvs_n_0", 00:29:06.594 "base_bdev": "0386abe7-483f-40d4-aa52-d3f880c3b033", 00:29:06.594 "total_data_clusters": 5114, 00:29:06.594 "free_clusters": 5114, 00:29:06.594 "block_size": 512, 00:29:06.594 "cluster_size": 4194304 00:29:06.594 } 00:29:06.594 ]' 00:29:06.594 10:24:19 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a90da575-7dd9-4244-9e24-a9b80f27a4e9") .free_clusters' 00:29:06.853 10:24:19 -- common/autotest_common.sh@1348 -- # fc=5114 00:29:06.853 10:24:19 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a90da575-7dd9-4244-9e24-a9b80f27a4e9") .cluster_size' 00:29:06.853 10:24:19 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:06.853 10:24:19 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:29:06.853 10:24:19 -- common/autotest_common.sh@1353 -- # echo 20456 00:29:06.853 20456 00:29:06.853 10:24:19 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:06.853 10:24:19 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a90da575-7dd9-4244-9e24-a9b80f27a4e9 lbd_nest_0 20456 00:29:07.128 10:24:20 -- host/perf.sh@88 -- # lb_nested_guid=24a04a7f-25f2-454a-92f5-f61104d24d1b 00:29:07.128 10:24:20 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.128 10:24:20 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:07.128 10:24:20 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 24a04a7f-25f2-454a-92f5-f61104d24d1b 00:29:07.403 10:24:20 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.403 10:24:20 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:07.403 10:24:20 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:07.403 10:24:20 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:07.403 10:24:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:07.403 10:24:20 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.666 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.887 Initializing NVMe Controllers 00:29:19.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.887 Initialization complete. Launching workers. 00:29:19.887 ======================================================== 00:29:19.887 Latency(us) 00:29:19.887 Device Information : IOPS MiB/s Average min max 00:29:19.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.20 0.02 21676.55 155.41 45971.74 00:29:19.887 ======================================================== 00:29:19.887 Total : 46.20 0.02 21676.55 155.41 45971.74 00:29:19.887 00:29:19.887 10:24:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:19.887 10:24:31 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.887 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.844 Initializing NVMe Controllers 00:29:29.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:29.844 Initialization complete. Launching workers. 
00:29:29.844 ======================================================== 00:29:29.844 Latency(us) 00:29:29.844 Device Information : IOPS MiB/s Average min max 00:29:29.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.20 9.28 13484.20 5985.59 47885.04 00:29:29.844 ======================================================== 00:29:29.844 Total : 74.20 9.28 13484.20 5985.59 47885.04 00:29:29.844 00:29:29.844 10:24:41 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:29.844 10:24:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:29.844 10:24:41 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.844 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.808 Initializing NVMe Controllers 00:29:39.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.808 Initialization complete. Launching workers. 00:29:39.808 ======================================================== 00:29:39.808 Latency(us) 00:29:39.808 Device Information : IOPS MiB/s Average min max 00:29:39.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8857.70 4.33 3613.20 263.59 8633.91 00:29:39.808 ======================================================== 00:29:39.808 Total : 8857.70 4.33 3613.20 263.59 8633.91 00:29:39.808 00:29:39.808 10:24:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:39.808 10:24:51 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.808 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.775 Initializing NVMe Controllers 00:29:49.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:49.775 Initialization complete. Launching workers. 00:29:49.775 ======================================================== 00:29:49.775 Latency(us) 00:29:49.775 Device Information : IOPS MiB/s Average min max 00:29:49.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1893.09 236.64 16909.60 1224.20 39009.78 00:29:49.775 ======================================================== 00:29:49.775 Total : 1893.09 236.64 16909.60 1224.20 39009.78 00:29:49.775 00:29:49.775 10:25:01 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:49.775 10:25:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:49.775 10:25:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.775 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.739 Initializing NVMe Controllers 00:29:59.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.739 Controller IO queue size 128, less than required. 00:29:59.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.739 Initialization complete. Launching workers. 
00:29:59.739 ======================================================== 00:29:59.739 Latency(us) 00:29:59.739 Device Information : IOPS MiB/s Average min max 00:29:59.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15616.37 7.63 8196.52 1573.08 18524.24 00:29:59.739 ======================================================== 00:29:59.739 Total : 15616.37 7.63 8196.52 1573.08 18524.24 00:29:59.739 00:29:59.739 10:25:12 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.739 10:25:12 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.739 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.709 Initializing NVMe Controllers 00:30:09.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.709 Controller IO queue size 128, less than required. 00:30:09.709 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:09.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.710 Initialization complete. Launching workers. 00:30:09.710 ======================================================== 00:30:09.710 Latency(us) 00:30:09.710 Device Information : IOPS MiB/s Average min max 00:30:09.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1230.72 153.84 104409.59 23357.84 190619.90 00:30:09.710 ======================================================== 00:30:09.710 Total : 1230.72 153.84 104409.59 23357.84 190619.90 00:30:09.710 00:30:09.710 10:25:22 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.710 10:25:22 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24a04a7f-25f2-454a-92f5-f61104d24d1b 00:30:10.275 10:25:23 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:10.534 10:25:23 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0386abe7-483f-40d4-aa52-d3f880c3b033 00:30:10.792 10:25:23 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:11.050 10:25:24 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:11.050 10:25:24 -- host/perf.sh@114 -- # nvmftestfini 00:30:11.050 10:25:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:11.050 10:25:24 -- nvmf/common.sh@116 -- # sync 00:30:11.050 10:25:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:11.050 10:25:24 -- nvmf/common.sh@119 -- # set +e 00:30:11.050 10:25:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:11.050 10:25:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:11.050 rmmod nvme_tcp 00:30:11.050 rmmod nvme_fabrics 00:30:11.050 rmmod nvme_keyring 00:30:11.050 10:25:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:11.050 10:25:24 -- nvmf/common.sh@123 -- # set -e 00:30:11.050 10:25:24 -- nvmf/common.sh@124 -- # return 0 00:30:11.050 10:25:24 -- nvmf/common.sh@477 -- # '[' -n 438605 ']' 00:30:11.050 10:25:24 -- nvmf/common.sh@478 -- # killprocess 438605 00:30:11.050 10:25:24 -- common/autotest_common.sh@926 -- # '[' -z 438605 ']' 00:30:11.050 10:25:24 -- common/autotest_common.sh@930 -- # kill -0 
438605 00:30:11.050 10:25:24 -- common/autotest_common.sh@931 -- # uname 00:30:11.050 10:25:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:11.050 10:25:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 438605 00:30:11.050 10:25:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:11.050 10:25:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:11.050 10:25:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 438605' 00:30:11.050 killing process with pid 438605 00:30:11.050 10:25:24 -- common/autotest_common.sh@945 -- # kill 438605 00:30:11.050 10:25:24 -- common/autotest_common.sh@950 -- # wait 438605 00:30:12.951 10:25:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:12.951 10:25:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:12.951 10:25:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:12.951 10:25:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.951 10:25:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:12.951 10:25:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.951 10:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.951 10:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.854 10:25:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:14.854 00:30:14.854 real 1m32.959s 00:30:14.854 user 5m35.706s 00:30:14.854 sys 0m14.005s 00:30:14.854 10:25:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.854 10:25:27 -- common/autotest_common.sh@10 -- # set +x 00:30:14.854 ************************************ 00:30:14.854 END TEST nvmf_perf 00:30:14.854 ************************************ 00:30:14.854 10:25:27 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:14.854 10:25:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:14.854 10:25:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:14.854 10:25:27 -- common/autotest_common.sh@10 -- # set +x 00:30:14.854 ************************************ 00:30:14.854 START TEST nvmf_fio_host 00:30:14.854 ************************************ 00:30:14.854 10:25:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:14.854 * Looking for test storage... 
00:30:14.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.854 10:25:27 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.854 10:25:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.854 10:25:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.854 10:25:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.854 10:25:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- paths/export.sh@5 -- # export PATH 00:30:14.854 10:25:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.854 10:25:27 -- nvmf/common.sh@7 -- # uname -s 00:30:14.854 10:25:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.854 10:25:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.854 10:25:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.854 10:25:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.854 10:25:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.854 10:25:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.854 10:25:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.854 10:25:27 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.854 10:25:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.854 10:25:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.854 10:25:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:14.854 10:25:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:14.854 10:25:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.854 10:25:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.854 10:25:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.854 10:25:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.854 10:25:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.854 10:25:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.854 10:25:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.854 10:25:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- paths/export.sh@5 -- # export PATH 00:30:14.854 10:25:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.854 10:25:27 -- nvmf/common.sh@46 -- # : 0 00:30:14.854 10:25:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:14.854 10:25:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:14.854 10:25:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:14.854 10:25:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.854 10:25:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.854 10:25:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:14.854 10:25:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:14.854 10:25:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:14.854 10:25:27 -- host/fio.sh@12 -- # nvmftestinit 00:30:14.854 10:25:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:14.854 10:25:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.854 10:25:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:14.854 10:25:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:14.854 10:25:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:14.854 10:25:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.854 10:25:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.854 10:25:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.854 10:25:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:14.854 10:25:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:14.854 10:25:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:14.854 10:25:27 -- common/autotest_common.sh@10 -- # set +x 00:30:20.124 10:25:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:20.124 10:25:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:20.124 10:25:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:20.124 10:25:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:20.124 10:25:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:20.124 10:25:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:20.124 10:25:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:20.124 10:25:32 -- nvmf/common.sh@294 -- # net_devs=() 00:30:20.124 10:25:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:20.124 10:25:32 -- nvmf/common.sh@295 -- # e810=() 00:30:20.124 10:25:32 -- nvmf/common.sh@295 -- # local -ga e810 00:30:20.124 10:25:32 -- nvmf/common.sh@296 -- # x722=() 00:30:20.124 10:25:32 -- nvmf/common.sh@296 -- # local -ga x722 00:30:20.124 10:25:32 -- nvmf/common.sh@297 -- # mlx=() 00:30:20.124 10:25:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:20.124 10:25:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.124 10:25:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:20.124 10:25:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:20.124 10:25:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:20.124 10:25:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:20.124 10:25:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:20.124 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:20.124 10:25:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:20.124 10:25:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:20.124 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:20.124 10:25:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:20.124 10:25:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:20.124 10:25:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.124 10:25:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:20.124 10:25:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.124 10:25:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:20.124 Found net devices under 0000:86:00.0: cvl_0_0 00:30:20.124 10:25:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.124 10:25:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:20.124 10:25:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.124 10:25:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:20.124 10:25:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.124 10:25:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:20.124 Found net devices under 0000:86:00.1: cvl_0_1 00:30:20.124 10:25:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.124 10:25:32 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:20.124 10:25:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:20.124 10:25:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:20.124 10:25:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:20.124 10:25:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.124 10:25:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.124 10:25:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.124 10:25:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:20.124 10:25:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.124 10:25:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.124 10:25:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:20.125 10:25:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.125 10:25:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.125 10:25:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:20.125 10:25:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:20.125 10:25:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.125 10:25:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.125 10:25:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.125 10:25:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.125 10:25:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:20.125 10:25:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.125 10:25:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.125 10:25:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.125 10:25:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:20.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:20.125 00:30:20.125 --- 10.0.0.2 ping statistics --- 00:30:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.125 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:20.125 10:25:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:30:20.125 00:30:20.125 --- 10.0.0.1 ping statistics --- 00:30:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.125 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:30:20.125 10:25:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.125 10:25:33 -- nvmf/common.sh@410 -- # return 0 00:30:20.125 10:25:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:20.125 10:25:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.125 10:25:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:20.125 10:25:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:20.125 10:25:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.125 10:25:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:20.125 10:25:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:20.125 10:25:33 -- host/fio.sh@14 -- # [[ y != y ]] 00:30:20.125 10:25:33 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:30:20.125 10:25:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:20.125 10:25:33 -- common/autotest_common.sh@10 -- # set +x 00:30:20.125 10:25:33 -- host/fio.sh@22 -- # nvmfpid=456436 00:30:20.125 10:25:33 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:20.125 10:25:33 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.125 10:25:33 -- host/fio.sh@26 -- # waitforlisten 456436 00:30:20.125 10:25:33 -- common/autotest_common.sh@819 -- # '[' -z 456436 ']' 00:30:20.125 10:25:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.125 10:25:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:20.125 10:25:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.125 10:25:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:20.125 10:25:33 -- common/autotest_common.sh@10 -- # set +x 00:30:20.125 [2024-04-24 10:25:33.283255] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:30:20.125 [2024-04-24 10:25:33.283299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.125 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.125 [2024-04-24 10:25:33.341495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.384 [2024-04-24 10:25:33.421716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:20.384 [2024-04-24 10:25:33.421822] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.384 [2024-04-24 10:25:33.421830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.384 [2024-04-24 10:25:33.421837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
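This second target, brought up by fio.sh with the same namespace layout and core mask as the perf test, is exercised through SPDK's fio plugin instead of spdk_nvme_perf: fio runs with the plugin preloaded and the NVMe-oF transport ID encoded in --filename, as the fio invocations further below show. A minimal sketch of that pattern, using the paths from this workspace; the job file sets ioengine=spdk (visible in fio's banner later), and the sanitizer LD_PRELOAD handling traced below is omitted here:

# --filename carries the NVMe-oF transport ID rather than a block device path
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096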
00:30:20.384 [2024-04-24 10:25:33.421880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.384 [2024-04-24 10:25:33.421984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.384 [2024-04-24 10:25:33.422067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:20.384 [2024-04-24 10:25:33.422069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.951 10:25:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:20.951 10:25:34 -- common/autotest_common.sh@852 -- # return 0 00:30:20.951 10:25:34 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:20.951 10:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 [2024-04-24 10:25:34.102207] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.951 10:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.951 10:25:34 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:30:20.951 10:25:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 10:25:34 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:20.951 10:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 Malloc1 00:30:20.951 10:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.951 10:25:34 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.951 10:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 10:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.951 10:25:34 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:20.951 10:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 10:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.951 10:25:34 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.951 10:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 [2024-04-24 10:25:34.186160] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.951 10:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.951 10:25:34 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.951 10:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.951 10:25:34 -- common/autotest_common.sh@10 -- # set +x 00:30:20.951 10:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.951 10:25:34 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:20.951 10:25:34 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.951 10:25:34 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.951 10:25:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:20.951 10:25:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.951 10:25:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:20.951 10:25:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.951 10:25:34 -- common/autotest_common.sh@1320 -- # shift 00:30:20.951 10:25:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:20.951 10:25:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.951 10:25:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.951 10:25:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:20.951 10:25:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:20.951 10:25:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:20.951 10:25:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:20.951 10:25:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.951 10:25:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.951 10:25:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:21.208 10:25:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:21.208 10:25:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:21.208 10:25:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:21.208 10:25:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:21.208 10:25:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:21.466 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:21.466 fio-3.35 00:30:21.466 Starting 1 thread 00:30:21.466 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.054 00:30:24.054 test: (groupid=0, jobs=1): err= 0: pid=456813: Wed Apr 24 10:25:36 2024 00:30:24.054 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(95.8MiB/2004msec) 00:30:24.054 slat (nsec): min=1570, max=241597, avg=1745.36, stdev=2221.04 00:30:24.054 clat (usec): min=3852, max=9819, avg=5811.76, stdev=416.17 00:30:24.054 lat (usec): min=3883, max=9821, avg=5813.50, stdev=416.18 00:30:24.054 clat percentiles (usec): 00:30:24.054 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:30:24.054 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:30:24.054 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:30:24.054 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 9372], 00:30:24.054 | 99.99th=[ 9765] 00:30:24.054 bw ( KiB/s): min=48080, max=49408, per=99.91%, avg=48932.00, stdev=620.28, samples=4 00:30:24.054 iops : min=12020, max=12352, avg=12233.00, stdev=155.07, samples=4 00:30:24.054 write: IOPS=12.2k, BW=47.7MiB/s (50.0MB/s)(95.5MiB/2004msec); 0 zone resets 00:30:24.054 slat (nsec): min=1630, max=229312, avg=1856.90, stdev=1649.32 00:30:24.054 clat 
(usec): min=2458, max=8599, avg=4630.34, stdev=353.71 00:30:24.054 lat (usec): min=2474, max=8601, avg=4632.20, stdev=353.71 00:30:24.054 clat percentiles (usec): 00:30:24.054 | 1.00th=[ 3752], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:30:24.054 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:30:24.054 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:30:24.054 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 6718], 99.95th=[ 7111], 00:30:24.054 | 99.99th=[ 8455] 00:30:24.054 bw ( KiB/s): min=48448, max=49408, per=99.97%, avg=48790.00, stdev=438.24, samples=4 00:30:24.054 iops : min=12112, max=12352, avg=12197.50, stdev=109.56, samples=4 00:30:24.054 lat (msec) : 4=1.75%, 10=98.25% 00:30:24.054 cpu : usr=69.75%, sys=26.16%, ctx=115, majf=0, minf=5 00:30:24.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:24.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.054 issued rwts: total=24536,24452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.054 00:30:24.054 Run status group 0 (all jobs): 00:30:24.054 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=95.8MiB (100MB), run=2004-2004msec 00:30:24.054 WRITE: bw=47.7MiB/s (50.0MB/s), 47.7MiB/s-47.7MiB/s (50.0MB/s-50.0MB/s), io=95.5MiB (100MB), run=2004-2004msec 00:30:24.054 10:25:36 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:24.054 10:25:36 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:24.054 10:25:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:24.054 10:25:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:24.054 10:25:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:24.054 10:25:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.054 10:25:36 -- common/autotest_common.sh@1320 -- # shift 00:30:24.054 10:25:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:24.054 10:25:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:24.054 10:25:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:24.054 10:25:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:24.054 10:25:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:24.054 10:25:36 -- 
common/autotest_common.sh@1324 -- # asan_lib= 00:30:24.054 10:25:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:24.054 10:25:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:24.054 10:25:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:24.054 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:24.054 fio-3.35 00:30:24.054 Starting 1 thread 00:30:24.054 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.583 00:30:26.583 test: (groupid=0, jobs=1): err= 0: pid=457389: Wed Apr 24 10:25:39 2024 00:30:26.583 read: IOPS=10.5k, BW=164MiB/s (172MB/s)(330MiB/2007msec) 00:30:26.583 slat (nsec): min=2608, max=87758, avg=2913.06, stdev=1520.84 00:30:26.583 clat (usec): min=1047, max=15221, avg=7252.11, stdev=1892.92 00:30:26.583 lat (usec): min=1050, max=15223, avg=7255.02, stdev=1893.25 00:30:26.583 clat percentiles (usec): 00:30:26.583 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5538], 00:30:26.583 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7635], 00:30:26.583 | 70.00th=[ 8094], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10683], 00:30:26.583 | 99.00th=[12256], 99.50th=[12911], 99.90th=[13829], 99.95th=[14222], 00:30:26.583 | 99.99th=[14353] 00:30:26.583 bw ( KiB/s): min=80608, max=91848, per=50.68%, avg=85226.00, stdev=4938.18, samples=4 00:30:26.583 iops : min= 5038, max= 5740, avg=5326.50, stdev=308.41, samples=4 00:30:26.583 write: IOPS=6253, BW=97.7MiB/s (102MB/s)(174MiB/1779msec); 0 zone resets 00:30:26.583 slat (usec): min=30, max=423, avg=32.36, stdev= 8.28 00:30:26.583 clat (usec): min=2165, max=15703, avg=8411.15, stdev=1489.06 00:30:26.583 lat (usec): min=2196, max=15819, avg=8443.51, stdev=1491.52 00:30:26.583 clat percentiles (usec): 00:30:26.583 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7111], 00:30:26.583 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8717], 00:30:26.583 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[10945], 00:30:26.583 | 99.00th=[12518], 99.50th=[13304], 99.90th=[15139], 99.95th=[15401], 00:30:26.583 | 99.99th=[15664] 00:30:26.583 bw ( KiB/s): min=83200, max=95169, per=88.42%, avg=88472.25, stdev=5121.06, samples=4 00:30:26.583 iops : min= 5200, max= 5948, avg=5529.50, stdev=320.04, samples=4 00:30:26.583 lat (msec) : 2=0.04%, 4=1.28%, 10=88.10%, 20=10.58% 00:30:26.583 cpu : usr=82.75%, sys=14.56%, ctx=90, majf=0, minf=2 00:30:26.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:30:26.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:26.583 issued rwts: total=21094,11125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:26.583 00:30:26.583 Run status group 0 (all jobs): 00:30:26.583 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=330MiB (346MB), run=2007-2007msec 00:30:26.583 WRITE: bw=97.7MiB/s (102MB/s), 97.7MiB/s-97.7MiB/s (102MB/s-102MB/s), io=174MiB (182MB), run=1779-1779msec 00:30:26.583 10:25:39 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.583 10:25:39 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.583 10:25:39 -- common/autotest_common.sh@10 -- # set +x 00:30:26.583 10:25:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:26.583 10:25:39 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:30:26.583 10:25:39 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:30:26.583 10:25:39 -- host/fio.sh@49 -- # get_nvme_bdfs 00:30:26.583 10:25:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:26.583 10:25:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:26.583 10:25:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:26.583 10:25:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:26.583 10:25:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:26.583 10:25:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:26.583 10:25:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:30:26.583 10:25:39 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:30:26.583 10:25:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.583 10:25:39 -- common/autotest_common.sh@10 -- # set +x 00:30:29.865 Nvme0n1 00:30:29.865 10:25:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:29.865 10:25:42 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:29.865 10:25:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:29.865 10:25:42 -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 10:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.396 10:25:45 -- host/fio.sh@51 -- # ls_guid=60e6df3f-7172-474c-91fb-27f6f933316c 00:30:32.396 10:25:45 -- host/fio.sh@52 -- # get_lvs_free_mb 60e6df3f-7172-474c-91fb-27f6f933316c 00:30:32.396 10:25:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=60e6df3f-7172-474c-91fb-27f6f933316c 00:30:32.396 10:25:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:32.396 10:25:45 -- common/autotest_common.sh@1345 -- # local fc 00:30:32.396 10:25:45 -- common/autotest_common.sh@1346 -- # local cs 00:30:32.396 10:25:45 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:32.396 10:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.396 10:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 10:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.396 10:25:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:32.396 { 00:30:32.396 "uuid": "60e6df3f-7172-474c-91fb-27f6f933316c", 00:30:32.396 "name": "lvs_0", 00:30:32.396 "base_bdev": "Nvme0n1", 00:30:32.396 "total_data_clusters": 930, 00:30:32.396 "free_clusters": 930, 00:30:32.396 "block_size": 512, 00:30:32.396 "cluster_size": 1073741824 00:30:32.396 } 00:30:32.396 ]' 00:30:32.396 10:25:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="60e6df3f-7172-474c-91fb-27f6f933316c") .free_clusters' 00:30:32.396 10:25:45 -- common/autotest_common.sh@1348 -- # fc=930 00:30:32.396 10:25:45 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="60e6df3f-7172-474c-91fb-27f6f933316c") .cluster_size' 00:30:32.396 10:25:45 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:30:32.396 10:25:45 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:30:32.396 10:25:45 -- common/autotest_common.sh@1353 -- # echo 952320 00:30:32.396 952320 00:30:32.396 10:25:45 
-- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:32.396 10:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.396 10:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 c239ba21-ab66-44f0-a23e-c1e8a0075e04 00:30:32.396 10:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.396 10:25:45 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:32.396 10:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.396 10:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 10:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.396 10:25:45 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:32.396 10:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.396 10:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 10:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.396 10:25:45 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:32.396 10:25:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.396 10:25:45 -- common/autotest_common.sh@10 -- # set +x 00:30:32.396 10:25:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.396 10:25:45 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.396 10:25:45 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.396 10:25:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:32.396 10:25:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.396 10:25:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:32.396 10:25:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.396 10:25:45 -- common/autotest_common.sh@1320 -- # shift 00:30:32.396 10:25:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:32.396 10:25:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:32.396 10:25:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:32.396 10:25:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:32.396 10:25:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:32.396 10:25:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:32.396 10:25:45 -- common/autotest_common.sh@1331 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:32.396 10:25:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:32.654 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:32.654 fio-3.35 00:30:32.654 Starting 1 thread 00:30:32.654 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.183 00:30:35.183 test: (groupid=0, jobs=1): err= 0: pid=458928: Wed Apr 24 10:25:48 2024 00:30:35.183 read: IOPS=8231, BW=32.2MiB/s (33.7MB/s)(64.5MiB/2007msec) 00:30:35.183 slat (nsec): min=1568, max=294126, avg=1841.68, stdev=2799.33 00:30:35.183 clat (usec): min=650, max=170158, avg=8563.35, stdev=10198.64 00:30:35.183 lat (usec): min=651, max=170174, avg=8565.19, stdev=10198.79 00:30:35.183 clat percentiles (msec): 00:30:35.183 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:30:35.183 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:30:35.183 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:30:35.183 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 171], 00:30:35.183 | 99.99th=[ 171] 00:30:35.183 bw ( KiB/s): min=23104, max=36360, per=99.95%, avg=32910.00, stdev=6539.90, samples=4 00:30:35.183 iops : min= 5776, max= 9090, avg=8227.50, stdev=1634.97, samples=4 00:30:35.183 write: IOPS=8240, BW=32.2MiB/s (33.8MB/s)(64.6MiB/2007msec); 0 zone resets 00:30:35.183 slat (nsec): min=1629, max=211757, avg=1928.02, stdev=2089.75 00:30:35.183 clat (usec): min=224, max=168308, avg=6847.42, stdev=9498.99 00:30:35.183 lat (usec): min=226, max=168313, avg=6849.35, stdev=9499.41 00:30:35.183 clat percentiles (msec): 00:30:35.183 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:30:35.183 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:30:35.183 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 8], 00:30:35.183 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:30:35.183 | 99.99th=[ 169] 00:30:35.183 bw ( KiB/s): min=24104, max=36160, per=99.98%, avg=32954.00, stdev=5903.01, samples=4 00:30:35.183 iops : min= 6026, max= 9040, avg=8238.50, stdev=1475.75, samples=4 00:30:35.183 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:30:35.183 lat (msec) : 2=0.05%, 4=0.26%, 10=99.14%, 20=0.14%, 250=0.39% 00:30:35.183 cpu : usr=69.19%, sys=28.22%, ctx=73, majf=0, minf=5 00:30:35.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:35.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.183 issued rwts: total=16521,16538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.183 00:30:35.183 Run status group 0 (all jobs): 00:30:35.183 READ: bw=32.2MiB/s (33.7MB/s), 32.2MiB/s-32.2MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.7MB), run=2007-2007msec 00:30:35.183 WRITE: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=64.6MiB (67.7MB), run=2007-2007msec 00:30:35.183 10:25:48 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:35.183 10:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.183 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:30:35.183 10:25:48 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:30:35.183 10:25:48 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:35.183 10:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.183 10:25:48 -- common/autotest_common.sh@10 -- # set +x 00:30:35.749 10:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.750 10:25:48 -- host/fio.sh@62 -- # ls_nested_guid=e2dc8399-4e0e-4b06-a3b3-87ef011988b8 00:30:35.750 10:25:48 -- host/fio.sh@63 -- # get_lvs_free_mb e2dc8399-4e0e-4b06-a3b3-87ef011988b8 00:30:35.750 10:25:49 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e2dc8399-4e0e-4b06-a3b3-87ef011988b8 00:30:35.750 10:25:49 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:35.750 10:25:49 -- common/autotest_common.sh@1345 -- # local fc 00:30:35.750 10:25:49 -- common/autotest_common.sh@1346 -- # local cs 00:30:35.750 10:25:49 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:35.750 10:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.750 10:25:49 -- common/autotest_common.sh@10 -- # set +x 00:30:35.750 10:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.750 10:25:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:35.750 { 00:30:35.750 "uuid": "60e6df3f-7172-474c-91fb-27f6f933316c", 00:30:35.750 "name": "lvs_0", 00:30:35.750 "base_bdev": "Nvme0n1", 00:30:35.750 "total_data_clusters": 930, 00:30:35.750 "free_clusters": 0, 00:30:35.750 "block_size": 512, 00:30:35.750 "cluster_size": 1073741824 00:30:35.750 }, 00:30:35.750 { 00:30:35.750 "uuid": "e2dc8399-4e0e-4b06-a3b3-87ef011988b8", 00:30:35.750 "name": "lvs_n_0", 00:30:35.750 "base_bdev": "c239ba21-ab66-44f0-a23e-c1e8a0075e04", 00:30:35.750 "total_data_clusters": 237847, 00:30:35.750 "free_clusters": 237847, 00:30:35.750 "block_size": 512, 00:30:35.750 "cluster_size": 4194304 00:30:35.750 } 00:30:35.750 ]' 00:30:35.750 10:25:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e2dc8399-4e0e-4b06-a3b3-87ef011988b8") .free_clusters' 00:30:36.008 10:25:49 -- common/autotest_common.sh@1348 -- # fc=237847 00:30:36.008 10:25:49 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e2dc8399-4e0e-4b06-a3b3-87ef011988b8") .cluster_size' 00:30:36.008 10:25:49 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:36.008 10:25:49 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:30:36.008 10:25:49 -- common/autotest_common.sh@1353 -- # echo 951388 00:30:36.008 951388 00:30:36.008 10:25:49 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:36.008 10:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.008 10:25:49 -- common/autotest_common.sh@10 -- # set +x 00:30:36.265 2c74f8b4-33ac-47a2-bfe2-3d41807611ee 00:30:36.265 10:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.265 10:25:49 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:36.265 10:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.265 10:25:49 -- common/autotest_common.sh@10 -- # set +x 00:30:36.265 10:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.265 10:25:49 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:36.265 10:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.265 10:25:49 -- common/autotest_common.sh@10 -- # set +x 00:30:36.265 10:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
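The two free_mb values computed above are plain arithmetic: free_clusters times cluster_size, expressed in MiB. For lvs_0 that is 930 clusters x 1 GiB = 952320 MB; for the nested store lvs_n_0 it is 237847 clusters x 4 MiB = 951388 MB. A minimal bash sketch of the same computation, reusing the rpc.py and jq calls visible in the trace (the uuid is lvs_n_0 from this run; the standalone-script framing is illustrative, and the real get_lvs_free_mb helper may do more than this):

    #!/usr/bin/env bash
    # Free space of an SPDK lvstore in MiB: free_clusters * cluster_size / 1 MiB
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs_uuid=e2dc8399-4e0e-4b06-a3b3-87ef011988b8   # lvs_n_0 in this run
    lvs_info=$($rpc_py bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")
    cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<< "$lvs_info")
    free_mb=$((fc * cs / 1024 / 1024))   # 237847 * 4194304 / 1048576 = 951388
    echo "$free_mb"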
00:30:36.266 10:25:49 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:36.266 10:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:36.266 10:25:49 -- common/autotest_common.sh@10 -- # set +x 00:30:36.266 10:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:36.266 10:25:49 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.266 10:25:49 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.266 10:25:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:36.266 10:25:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.266 10:25:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:36.266 10:25:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.266 10:25:49 -- common/autotest_common.sh@1320 -- # shift 00:30:36.266 10:25:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:36.266 10:25:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:36.266 10:25:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:36.266 10:25:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:36.266 10:25:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:36.523 10:25:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:36.523 10:25:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:36.523 10:25:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:36.523 10:25:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:36.523 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:36.523 fio-3.35 00:30:36.523 Starting 1 thread 00:30:36.781 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.308 00:30:39.308 test: (groupid=0, jobs=1): err= 0: pid=459744: Wed Apr 24 10:25:52 2024 00:30:39.308 read: IOPS=7904, BW=30.9MiB/s (32.4MB/s)(61.9MiB/2006msec) 00:30:39.308 slat (nsec): min=1603, max=108214, avg=1710.66, stdev=1109.70 00:30:39.308 clat (usec): min=3243, max=15713, avg=8981.47, stdev=733.31 00:30:39.308 lat (usec): min=3247, max=15715, avg=8983.18, stdev=733.26 00:30:39.308 clat percentiles (usec): 00:30:39.308 | 
1.00th=[ 7308], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:30:39.308 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:30:39.308 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:30:39.308 | 99.00th=[10683], 99.50th=[10814], 99.90th=[12649], 99.95th=[13829], 00:30:39.308 | 99.99th=[15664] 00:30:39.308 bw ( KiB/s): min=30232, max=32184, per=99.86%, avg=31572.00, stdev=902.87, samples=4 00:30:39.308 iops : min= 7558, max= 8046, avg=7893.00, stdev=225.72, samples=4 00:30:39.308 write: IOPS=7878, BW=30.8MiB/s (32.3MB/s)(61.7MiB/2006msec); 0 zone resets 00:30:39.308 slat (nsec): min=1663, max=83212, avg=1788.96, stdev=698.66 00:30:39.308 clat (usec): min=1590, max=12787, avg=7118.78, stdev=640.97 00:30:39.308 lat (usec): min=1594, max=12789, avg=7120.57, stdev=640.95 00:30:39.308 clat percentiles (usec): 00:30:39.308 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6652], 00:30:39.308 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:30:39.308 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8094], 00:30:39.308 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10945], 99.95th=[11863], 00:30:39.308 | 99.99th=[12780] 00:30:39.308 bw ( KiB/s): min=31320, max=31616, per=99.94%, avg=31494.00, stdev=127.23, samples=4 00:30:39.308 iops : min= 7830, max= 7904, avg=7873.50, stdev=31.81, samples=4 00:30:39.308 lat (msec) : 2=0.01%, 4=0.07%, 10=96.49%, 20=3.43% 00:30:39.308 cpu : usr=64.29%, sys=32.07%, ctx=111, majf=0, minf=5 00:30:39.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:39.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:39.308 issued rwts: total=15856,15804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:39.308 00:30:39.308 Run status group 0 (all jobs): 00:30:39.308 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=61.9MiB (64.9MB), run=2006-2006msec 00:30:39.308 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.7MiB (64.7MB), run=2006-2006msec 00:30:39.308 10:25:52 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:39.308 10:25:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.308 10:25:52 -- common/autotest_common.sh@10 -- # set +x 00:30:39.308 10:25:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:39.308 10:25:52 -- host/fio.sh@72 -- # sync 00:30:39.308 10:25:52 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:39.308 10:25:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.308 10:25:52 -- common/autotest_common.sh@10 -- # set +x 00:30:42.584 10:25:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.584 10:25:55 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:30:42.584 10:25:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.584 10:25:55 -- common/autotest_common.sh@10 -- # set +x 00:30:42.584 10:25:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:42.584 10:25:55 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:30:42.584 10:25:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:42.584 10:25:55 -- common/autotest_common.sh@10 -- # set +x 00:30:45.269 10:25:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.269 10:25:58 -- 
host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:30:45.269 10:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.269 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:45.269 10:25:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:45.269 10:25:58 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:30:45.269 10:25:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:45.269 10:25:58 -- common/autotest_common.sh@10 -- # set +x 00:30:47.171 10:25:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.171 10:25:59 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:30:47.171 10:25:59 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:30:47.171 10:25:59 -- host/fio.sh@84 -- # nvmftestfini 00:30:47.171 10:25:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:47.171 10:25:59 -- nvmf/common.sh@116 -- # sync 00:30:47.171 10:25:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:47.171 10:25:59 -- nvmf/common.sh@119 -- # set +e 00:30:47.171 10:25:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:47.171 10:25:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:47.171 rmmod nvme_tcp 00:30:47.171 rmmod nvme_fabrics 00:30:47.171 rmmod nvme_keyring 00:30:47.171 10:26:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:47.171 10:26:00 -- nvmf/common.sh@123 -- # set -e 00:30:47.171 10:26:00 -- nvmf/common.sh@124 -- # return 0 00:30:47.171 10:26:00 -- nvmf/common.sh@477 -- # '[' -n 456436 ']' 00:30:47.171 10:26:00 -- nvmf/common.sh@478 -- # killprocess 456436 00:30:47.171 10:26:00 -- common/autotest_common.sh@926 -- # '[' -z 456436 ']' 00:30:47.171 10:26:00 -- common/autotest_common.sh@930 -- # kill -0 456436 00:30:47.171 10:26:00 -- common/autotest_common.sh@931 -- # uname 00:30:47.171 10:26:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:47.171 10:26:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 456436 00:30:47.171 10:26:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:47.171 10:26:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:47.171 10:26:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 456436' 00:30:47.171 killing process with pid 456436 00:30:47.171 10:26:00 -- common/autotest_common.sh@945 -- # kill 456436 00:30:47.171 10:26:00 -- common/autotest_common.sh@950 -- # wait 456436 00:30:47.171 10:26:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:47.171 10:26:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:47.171 10:26:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:47.171 10:26:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:47.171 10:26:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:47.171 10:26:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.171 10:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.171 10:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.705 10:26:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:49.705 00:30:49.705 real 0m34.555s 00:30:49.705 user 2m15.519s 00:30:49.705 sys 0m7.757s 00:30:49.705 10:26:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.705 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.705 ************************************ 00:30:49.705 END TEST nvmf_fio_host 00:30:49.705 ************************************ 00:30:49.705 10:26:02 -- nvmf/nvmf.sh@99 -- # run_test 
nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:49.705 10:26:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:49.705 10:26:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:49.706 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.706 ************************************ 00:30:49.706 START TEST nvmf_failover 00:30:49.706 ************************************ 00:30:49.706 10:26:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:49.706 * Looking for test storage... 00:30:49.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.706 10:26:02 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.706 10:26:02 -- nvmf/common.sh@7 -- # uname -s 00:30:49.706 10:26:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.706 10:26:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.706 10:26:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.706 10:26:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.706 10:26:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.706 10:26:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.706 10:26:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.706 10:26:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.706 10:26:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.706 10:26:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.706 10:26:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:49.706 10:26:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:49.706 10:26:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.706 10:26:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.706 10:26:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.706 10:26:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.706 10:26:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.706 10:26:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.706 10:26:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.706 10:26:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.706 10:26:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.706 10:26:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.706 10:26:02 -- paths/export.sh@5 -- # export PATH 00:30:49.706 10:26:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.706 10:26:02 -- nvmf/common.sh@46 -- # : 0 00:30:49.706 10:26:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:49.706 10:26:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:49.706 10:26:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:49.706 10:26:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.706 10:26:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.706 10:26:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:49.706 10:26:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:49.706 10:26:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:49.706 10:26:02 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:49.706 10:26:02 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:49.706 10:26:02 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:49.706 10:26:02 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:49.706 10:26:02 -- host/failover.sh@18 -- # nvmftestinit 00:30:49.706 10:26:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:49.706 10:26:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.706 10:26:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:49.706 10:26:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:49.706 10:26:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:49.706 10:26:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.706 10:26:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.706 10:26:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.706 10:26:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:49.706 10:26:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
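nvmf/common.sh above generates a host NQN with nvme gen-hostnqn and stores it, with the matching host ID, in the NVME_HOST array so tests can present a stable identity to the target. For reference, this is the shape of the kernel-initiator connect those variables feed into (hypothetical here: no nvme connect is issued in this part of the log; the address, port, and subsystem NQN are the ones this run uses elsewhere):

    # Hypothetical use of NVME_HOSTNQN/NVME_HOSTID as set by nvmf/common.sh@17-18
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562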
00:30:49.706 10:26:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:49.706 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:30:54.976 10:26:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:54.976 10:26:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:54.976 10:26:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:54.976 10:26:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:54.976 10:26:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:54.976 10:26:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:54.976 10:26:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:54.976 10:26:07 -- nvmf/common.sh@294 -- # net_devs=() 00:30:54.976 10:26:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:54.976 10:26:07 -- nvmf/common.sh@295 -- # e810=() 00:30:54.976 10:26:07 -- nvmf/common.sh@295 -- # local -ga e810 00:30:54.976 10:26:07 -- nvmf/common.sh@296 -- # x722=() 00:30:54.976 10:26:07 -- nvmf/common.sh@296 -- # local -ga x722 00:30:54.976 10:26:07 -- nvmf/common.sh@297 -- # mlx=() 00:30:54.976 10:26:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:54.976 10:26:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.976 10:26:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:54.976 10:26:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:54.976 10:26:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:54.976 10:26:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:54.976 10:26:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:54.976 10:26:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:54.976 10:26:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:54.976 10:26:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:54.976 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:54.976 10:26:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:54.977 10:26:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:54.977 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:54.977 10:26:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:54.977 10:26:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:54.977 10:26:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:54.977 10:26:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.977 10:26:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:54.977 10:26:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.977 10:26:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:54.977 Found net devices under 0000:86:00.0: cvl_0_0 00:30:54.977 10:26:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.977 10:26:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:54.977 10:26:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.977 10:26:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:54.977 10:26:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.977 10:26:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:54.977 Found net devices under 0000:86:00.1: cvl_0_1 00:30:54.977 10:26:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.977 10:26:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:54.977 10:26:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:54.977 10:26:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:54.977 10:26:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.977 10:26:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.977 10:26:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.977 10:26:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:54.977 10:26:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.977 10:26:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.977 10:26:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:54.977 10:26:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.977 10:26:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.977 10:26:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:54.977 10:26:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:54.977 10:26:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.977 10:26:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.977 10:26:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.977 10:26:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.977 10:26:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:54.977 10:26:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.977 10:26:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.977 10:26:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.977 10:26:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:54.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:30:54.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:30:54.977 00:30:54.977 --- 10.0.0.2 ping statistics --- 00:30:54.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.977 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:54.977 10:26:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:54.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:30:54.977 00:30:54.977 --- 10.0.0.1 ping statistics --- 00:30:54.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.977 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:54.977 10:26:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.977 10:26:07 -- nvmf/common.sh@410 -- # return 0 00:30:54.977 10:26:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:54.977 10:26:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.977 10:26:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:54.977 10:26:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.977 10:26:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:54.977 10:26:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:54.977 10:26:07 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:54.977 10:26:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:54.977 10:26:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:54.977 10:26:07 -- common/autotest_common.sh@10 -- # set +x 00:30:54.977 10:26:07 -- nvmf/common.sh@469 -- # nvmfpid=464711 00:30:54.977 10:26:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:54.977 10:26:07 -- nvmf/common.sh@470 -- # waitforlisten 464711 00:30:54.977 10:26:07 -- common/autotest_common.sh@819 -- # '[' -z 464711 ']' 00:30:54.977 10:26:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.977 10:26:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:54.977 10:26:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.977 10:26:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:54.977 10:26:07 -- common/autotest_common.sh@10 -- # set +x 00:30:54.977 [2024-04-24 10:26:07.956639] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:30:54.977 [2024-04-24 10:26:07.956681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.977 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.977 [2024-04-24 10:26:08.013715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:54.977 [2024-04-24 10:26:08.091127] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:54.977 [2024-04-24 10:26:08.091234] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.977 [2024-04-24 10:26:08.091242] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:54.977 [2024-04-24 10:26:08.091248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.977 [2024-04-24 10:26:08.091342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.977 [2024-04-24 10:26:08.091365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.977 [2024-04-24 10:26:08.091366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.546 10:26:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:55.546 10:26:08 -- common/autotest_common.sh@852 -- # return 0 00:30:55.546 10:26:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:55.546 10:26:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:55.546 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 10:26:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.546 10:26:08 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:55.806 [2024-04-24 10:26:08.939918] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.806 10:26:08 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:56.065 Malloc0 00:30:56.065 10:26:09 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:56.324 10:26:09 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:56.324 10:26:09 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.583 [2024-04-24 10:26:09.700438] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.583 10:26:09 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:56.842 [2024-04-24 10:26:09.880984] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:56.842 10:26:09 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:56.842 [2024-04-24 10:26:10.065658] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:56.842 10:26:10 -- host/failover.sh@31 -- # bdevperf_pid=465169 00:30:56.842 10:26:10 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:56.842 10:26:10 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:56.842 10:26:10 -- host/failover.sh@34 -- # waitforlisten 465169 /var/tmp/bdevperf.sock 00:30:56.842 10:26:10 -- common/autotest_common.sh@819 -- # '[' -z 465169 ']' 00:30:56.842 10:26:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:56.842 10:26:10 -- common/autotest_common.sh@824 -- # local max_retries=100 
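The harness pattern at work here: bdevperf is launched suspended (-z) on a private RPC socket (-r), the test then configures it over that socket, and a helper script later signals it to start the workload. Condensed from the commands in this trace (the backgrounding with & is illustrative; the script's own wrappers handle that):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle; -z waits for an RPC start signal, -w verify checks data integrity
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    # Attach two paths to the same subsystem: port 4420 (primary) and 4421 (failover target)
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Kick off the 15-second verify workload defined on the command line above
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &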
00:30:56.842 10:26:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:26:10 -- common/autotest_common.sh@828 -- # xtrace_disable 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:30:57.780 10:26:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 10:26:10 -- common/autotest_common.sh@852 -- # return 0 10:26:10 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.038 NVMe0n1 00:30:58.038 10:26:11 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.606 00:30:58.606 10:26:11 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:58.606 10:26:11 -- host/failover.sh@39 -- # run_test_pid=465404 00:30:58.606 10:26:11 -- host/failover.sh@41 -- # sleep 1 00:30:59.545 10:26:12 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.545 [2024-04-24 10:26:12.784278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce500 is same with the state(5) to be set [same tcp.c:1574 message for tqpair=0x1bce500 repeated verbatim through 10:26:12.784603; duplicate lines omitted]
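This is the first failover trigger: with the verify workload running over the 4420 path, the test removes that listener out from under it, and bdevperf is expected to keep I/O flowing on the already-attached 4421 path. The burst of tcp.c:1574 messages above appears to be the target cycling the old 4420 qpairs as they are torn down. The step itself, exactly as issued in the trace:

    # Force failover: retire the listener the active path is connected through
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420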
00:30:59.545 10:26:12 -- host/failover.sh@45 -- # sleep 3
00:31:02.851 10:26:15 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:03.109 00
00:31:03.109 10:26:16 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:03.369 [2024-04-24 10:26:16.387878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcf3b0 is same with the state(5) to be set
00:31:03.369 [... same tcp.c:1574 *ERROR* line for tqpair=0x1bcf3b0 repeated, timestamps 10:26:16.387921 through 10:26:16.388179 ...]
00:31:03.369 10:26:16 -- host/failover.sh@50 -- # sleep 3
00:31:06.656 10:26:19 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:06.656 [2024-04-24 10:26:19.580037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:06.656 10:26:19 -- host/failover.sh@55 -- # sleep 1
00:31:07.654 10:26:20 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:07.654 [2024-04-24 10:26:20.773377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd00a0 is same with the state(5) to be set
00:31:07.654 [... same tcp.c:1574 *ERROR* line for tqpair=0x1bd00a0 repeated, timestamps 10:26:20.773420 through 10:26:20.773708 ...]
00:31:07.654 10:26:20 -- host/failover.sh@59 -- # wait 465404
00:31:14.224 0
00:31:14.224 10:26:26 -- host/failover.sh@61 -- # killprocess 465169
00:31:14.224 10:26:26 -- common/autotest_common.sh@926 -- # '[' -z 465169 ']'
00:31:14.224 10:26:26 -- common/autotest_common.sh@930 -- # kill -0 465169
00:31:14.224 10:26:26 -- common/autotest_common.sh@931 -- # uname
00:31:14.224 10:26:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:14.224 10:26:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 465169
00:31:14.224 10:26:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:31:14.224 10:26:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:31:14.224 10:26:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 465169'
00:31:14.224 killing process with pid 465169
00:31:14.224 10:26:26 -- common/autotest_common.sh@945 -- # kill 465169
00:31:14.224 10:26:26 -- common/autotest_common.sh@950 -- # wait 465169
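The failover exercise above reduces to a short RPC sequence: attach two TCP paths of the same subsystem under one controller name, start bdevperf I/O, then repeatedly remove the active listener so I/O has to move to a surviving path. A minimal sketch of that sequence, assuming a bdevperf instance already serving RPCs on /var/tmp/bdevperf.sock and a target exposing nqn.2016-06.io.spdk:cnode1 on 10.0.0.2; rootdir is shorthand introduced here for the workspace checkout used in this job:

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: checkout path from this job
  rpc="$rootdir/scripts/rpc.py"
  bsock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Two paths to the same subsystem under one controller name (active plus spare).
  "$rpc" -s "$bsock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
  "$rpc" -s "$bsock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"

  # Kick off I/O in the background, then pull listeners out from under it.
  "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bsock" perform_tests &
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # force 4420 -> 4421
  sleep 3
  "$rpc" -s "$bsock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # force 4421 -> 4422
  sleep 3
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # bring the original port back
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
  wait   # waits for the backgrounded bdevperf.py, which exits 0 on success

Note that the listener RPCs go to the target's default RPC socket, while the bdev_nvme calls are addressed to bdevperf's own socket via -s; that split matches the commands recorded above.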
00:31:14.224 10:26:26 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:14.224 [2024-04-24 10:26:10.136214] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:31:14.224 [2024-04-24 10:26:10.136265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465169 ]
00:31:14.224 EAL: No free 2048 kB hugepages reported on node 1
00:31:14.225 [2024-04-24 10:26:10.190571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:14.225 [2024-04-24 10:26:10.264530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:14.225 Running I/O for 15 seconds...
00:31:14.225 [2024-04-24 10:26:12.784882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.225 [2024-04-24 10:26:12.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.225 [... the same print_command/print_completion *NOTICE* pair repeated for every other READ/WRITE still queued on qid:1, all completed ABORTED - SQ DELETION (00/08), timestamps 10:26:12.784934 through 10:26:12.786840 ...]
00:31:14.228 [2024-04-24 10:26:12.786848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491010 is same with the state(5) to be set
00:31:14.228 [2024-04-24 10:26:12.786856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:14.228 [2024-04-24 10:26:12.786862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:14.228 [2024-04-24 10:26:12.786869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16760 len:8 PRP1 0x0 PRP2 0x0
00:31:14.228 [2024-04-24 10:26:12.786875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.228 [2024-04-24 10:26:12.786916] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2491010 was disconnected and freed. reset controller.
00:31:14.228 [2024-04-24 10:26:12.786929] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:14.228 [2024-04-24 10:26:12.786950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.228 [2024-04-24 10:26:12.786958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.228 [2024-04-24 10:26:12.786966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.228 [2024-04-24 10:26:12.786973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.228 [2024-04-24 10:26:12.786982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.228 [2024-04-24 10:26:12.786989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.228 [2024-04-24 10:26:12.786996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.228 [2024-04-24 10:26:12.787006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.228 [2024-04-24 10:26:12.787015] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.228 [2024-04-24 10:26:12.787040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249b010 (9): Bad file descriptor
00:31:14.228 [2024-04-24 10:26:12.789077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.228 [2024-04-24 10:26:12.937722] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
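For readers skimming the log: the sequence above is the bdev_nvme failover path behaving as expected during this test. The TCP qpair to 10.0.0.2:4420 drops, every queued command is completed manually with ABORTED - SQ DELETION, and the controller is reconnected on the next listed path (10.0.0.2:4421), after which "Resetting controller successful." appears. A minimal, self-contained C sketch of that rotate-and-reconnect loop follows; the names (struct path, try_connect, abort_queued_io) are hypothetical illustrations written for this note, not SPDK's internal API, which lives in bdev_nvme.c.

```c
/* Illustrative sketch (hypothetical names) of the failover sequence
 * visible in the log: on qpair disconnect, abort queued I/O, rotate to
 * the next path, and reconnect the controller there. */
#include <stdio.h>

struct path { const char *addr; int port; };

static int try_connect(const struct path *p)
{
    /* Stand-in for a transport connect; pretend port 4420 is dead. */
    return p->port != 4420;
}

static void abort_queued_io(void)
{
    /* At this point the log shows each queued command completed
     * manually with ABORTED - SQ DELETION (00/08). */
    printf("aborting queued i/o\n");
}

int main(void)
{
    const struct path paths[] = {
        { "10.0.0.2", 4420 }, { "10.0.0.2", 4421 }, { "10.0.0.2", 4422 },
    };
    size_t active = 0, n = sizeof(paths) / sizeof(paths[0]);

    abort_queued_io();  /* disconnect detected on the active path */

    /* Rotate through the remaining paths until one connects. */
    for (size_t i = 1; i < n; i++) {
        size_t next = (active + i) % n;
        printf("Start failover from %s:%d to %s:%d\n",
               paths[active].addr, paths[active].port,
               paths[next].addr, paths[next].port);
        if (try_connect(&paths[next])) {
            printf("Resetting controller successful.\n");
            return 0;
        }
    }
    printf("all paths failed\n");
    return 1;
}
```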
00:31:14.228 [2024-04-24 10:26:16.388330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.228 [2024-04-24 10:26:16.388364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.228 [2024-04-24 10:26:16.388378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.228 [2024-04-24 10:26:16.388389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further READ/WRITE commands on sqid:1 (lba 40712-42112) aborted with the same SQ DELETION (00/08) status; repeated entries trimmed ...]
00:31:14.232 [2024-04-24 10:26:16.390276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a7560 is same with the state(5) to be set
00:31:14.232 [2024-04-24 10:26:16.390286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:14.232 [2024-04-24 10:26:16.390293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:14.232 [2024-04-24 10:26:16.390299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41576 len:8 PRP1 0x0 PRP2 0x0
00:31:14.232 [2024-04-24 10:26:16.390306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.232 [2024-04-24 10:26:16.390345] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24a7560 was disconnected and freed. reset controller.
00:31:14.232 [2024-04-24 10:26:16.390354] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:14.232 [2024-04-24 10:26:16.390373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.232 [2024-04-24 10:26:16.390380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.232 [2024-04-24 10:26:16.390387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.232 [2024-04-24 10:26:16.390393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.232 [2024-04-24 10:26:16.390401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.232 [2024-04-24 10:26:16.390407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.232 [2024-04-24 10:26:16.390414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.232 [2024-04-24 10:26:16.390420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.232 [2024-04-24 10:26:16.390427] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.232 [2024-04-24 10:26:16.392194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.232 [2024-04-24 10:26:16.392222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249b010 (9): Bad file descriptor
00:31:14.232 [2024-04-24 10:26:16.512475] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
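The "(00/08)" printed on every completion above is the (status code type / status code) pair in hex: SCT 0x0 is the generic command status set, and SC 0x08 is defined by the NVMe base specification as "Command Aborted due to SQ Deletion". In other words, these completions are the expected casualties of tearing the submission queue down during failover, not media or transport data errors. The short stand-alone C decoder below illustrates that mapping; the constants are written out as literals taken from the spec rather than pulled from SPDK headers, and the helper name decode is hypothetical.

```c
/* Decode the "(sct/sc)" pair that the completion log lines print,
 * e.g. "(00/08)" above.  Values are from the NVMe base specification;
 * this is an illustrative stand-alone snippet, not SPDK code. */
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x0  /* generic command status set */
#define NVME_SC_SUCCESS             0x00
#define NVME_SC_ABORTED_SQ_DELETION 0x08 /* aborted due to SQ deletion */

static const char *decode(uint8_t sct, uint8_t sc)
{
    if (sct != NVME_SCT_GENERIC)
        return "non-generic status (consult the spec for this SCT)";
    switch (sc) {
    case NVME_SC_SUCCESS:             return "SUCCESS";
    case NVME_SC_ABORTED_SQ_DELETION: return "ABORTED - SQ DELETION";
    default:                          return "other generic status";
    }
}

int main(void)
{
    /* The pair printed throughout this log: */
    printf("(00/08) => %s\n", decode(0x0, 0x08));
    return 0;
}
```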
00:31:14.232 [2024-04-24 10:26:20.773869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.232 [2024-04-24 10:26:20.773904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.232 [2024-04-24 10:26:20.773919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:14.232 [2024-04-24 10:26:20.773927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further READ/WRITE commands on sqid:1 (lba 28584-29344) aborted with the same SQ DELETION (00/08) status; repeated entries trimmed ...]
00:31:14.233 [2024-04-24 10:26:20.774369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:5 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29392 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:14.233 [2024-04-24 10:26:20.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774844] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.233 [2024-04-24 10:26:20.774916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.233 [2024-04-24 10:26:20.774924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.233 [2024-04-24 10:26:20.774931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.774939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.774946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.774954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.774960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.774969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.774975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.774983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.774990] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.774998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:14.234 [2024-04-24 10:26:20.775313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.234 [2024-04-24 10:26:20.775355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.234 [2024-04-24 10:26:20.775441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.234 [2024-04-24 10:26:20.775448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775469] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775627] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.235 [2024-04-24 10:26:20.775754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29216 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.235 [2024-04-24 10:26:20.775859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496de0 is same with the state(5) to be set 00:31:14.235 [2024-04-24 10:26:20.775875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:14.235 [2024-04-24 10:26:20.775882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:14.235 [2024-04-24 10:26:20.775888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29312 len:8 PRP1 0x0 PRP2 0x0 00:31:14.235 [2024-04-24 10:26:20.775894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.235 [2024-04-24 10:26:20.775936] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2496de0 was disconnected and freed. reset controller. 
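The flood of identical NOTICE pairs above is bdev_nvme draining every outstanding I/O when the TCP submission queue is torn down mid-failover; each command is completed with ABORTED - SQ DELETION rather than silently dropped. A quick way to summarize such a capture, as a hedged sketch (try.txt is the capture file this test cats later in the trace; the grep patterns themselves are an editorial convenience, not part of the test script):

  # Tally aborted I/O against completed failovers in the captured output.
  aborted=$(grep -c 'ABORTED - SQ DELETION' try.txt)
  resets=$(grep -c 'Resetting controller successful' try.txt)
  echo "aborted=${aborted} resets=${resets}"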
00:31:14.235 [2024-04-24 10:26:20.775945] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:14.235 [2024-04-24 10:26:20.775965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.235 [2024-04-24 10:26:20.775973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.235 [2024-04-24 10:26:20.775980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.235 [2024-04-24 10:26:20.775987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.235 [2024-04-24 10:26:20.775994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.235 [2024-04-24 10:26:20.776001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.235 [2024-04-24 10:26:20.776008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:14.235 [2024-04-24 10:26:20.776014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.235 [2024-04-24 10:26:20.776021] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.235 [2024-04-24 10:26:20.777872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.235 [2024-04-24 10:26:20.777895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249b010 (9): Bad file descriptor
00:31:14.235 [2024-04-24 10:26:20.810659] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
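The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice works because the NVMe0 controller was attached with several trids for the same subsystem. A minimal sketch of that multipath registration, mirroring the rpc.py calls traced in the second test phase below (the RPC shell variable and the for-loop are conveniences, not taken from the script):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  # One bdev, three TCP paths; bdev_nvme fails over to the next registered
  # trid when the active path's queue pair is disconnected.
  for port in 4420 4421 4422; do
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done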
00:31:14.235 
00:31:14.235 Latency(us)
00:31:14.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:14.235 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:14.235 Verification LBA range: start 0x0 length 0x4000
00:31:14.235 NVMe0n1 : 15.00 16484.57 64.39 1444.21 0.00 7126.35 644.67 14075.99
00:31:14.235 ===================================================================================================================
00:31:14.235 Total : 16484.57 64.39 1444.21 0.00 7126.35 644.67 14075.99
00:31:14.235 Received shutdown signal, test time was about 15.000000 seconds
00:31:14.235 
00:31:14.235 Latency(us)
00:31:14.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:14.235 ===================================================================================================================
00:31:14.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:14.235 10:26:26 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:14.236 10:26:27 -- host/failover.sh@65 -- # count=3
00:31:14.236 10:26:27 -- host/failover.sh@67 -- # (( count != 3 ))
00:31:14.236 10:26:27 -- host/failover.sh@73 -- # bdevperf_pid=467974
00:31:14.236 10:26:27 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:14.236 10:26:27 -- host/failover.sh@75 -- # waitforlisten 467974 /var/tmp/bdevperf.sock
00:31:14.236 10:26:27 -- common/autotest_common.sh@819 -- # '[' -z 467974 ']'
00:31:14.236 10:26:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:14.236 10:26:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:14.236 10:26:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:14.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
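The @65-@67 gate above reduces to counting successful resets in the captured bdevperf output; a standalone sketch of that check, assuming the same capture file (try.txt) this run writes and removes later in the trace:

  # Three path failovers are provoked during the 15s run, so exactly three
  # 'Resetting controller successful' notices must appear in the capture.
  count=$(grep -c 'Resetting controller successful' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1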
00:31:14.236 10:26:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:14.236 10:26:27 -- common/autotest_common.sh@10 -- # set +x
00:31:14.802 10:26:27 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:14.802 10:26:27 -- common/autotest_common.sh@852 -- # return 0
00:31:14.802 10:26:27 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:14.802 [2024-04-24 10:26:27.997747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:14.802 10:26:28 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:31:15.060 [2024-04-24 10:26:28.186283] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:31:15.060 10:26:28 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:15.318 NVMe0n1
00:31:15.318 10:26:28 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:15.576 00
00:31:15.576 10:26:28 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:15.834 00
00:31:15.834 10:26:29 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:16.092 10:26:29 -- host/failover.sh@82 -- # grep -q NVMe0
00:31:16.092 10:26:29 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:16.092 10:26:29 -- host/failover.sh@87 -- # sleep 3
00:31:19.374 10:26:32 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:19.374 10:26:32 -- host/failover.sh@88 -- # grep -q NVMe0
00:31:19.374 10:26:32 -- host/failover.sh@90 -- # run_test_pid=468855
00:31:19.374 10:26:32 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:19.374 10:26:32 -- host/failover.sh@92 -- # wait 468855
00:31:20.747 0
00:31:20.747 10:26:33 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:20.747 [2024-04-24 10:26:27.045706] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
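The @84-@92 sequence above is the failover trigger plus the asynchronous I/O phase; a condensed sketch of the same pattern (the explicit & backgrounding and the RPC variable are spelled out here for readability, the commands themselves match the trace):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  # Drop the active 4420 path to force a failover, give the reset 3s,
  # confirm the controller is still registered, then run the I/O job.
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $RPC bdev_nvme_get_controllers | grep -q NVMe0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait "$run_test_pid"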
00:31:20.747 [2024-04-24 10:26:27.045760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467974 ]
00:31:20.747 EAL: No free 2048 kB hugepages reported on node 1
00:31:20.747 [2024-04-24 10:26:27.100215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:20.747 [2024-04-24 10:26:27.169353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:20.747 [2024-04-24 10:26:29.331559] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:20.747 [2024-04-24 10:26:29.331610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:20.747 [2024-04-24 10:26:29.331621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:20.747 [2024-04-24 10:26:29.331629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:20.747 [2024-04-24 10:26:29.331636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:20.747 [2024-04-24 10:26:29.331643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:20.747 [2024-04-24 10:26:29.331650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:20.747 [2024-04-24 10:26:29.331656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:20.747 [2024-04-24 10:26:29.331663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:20.747 [2024-04-24 10:26:29.331669] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:20.747 [2024-04-24 10:26:29.331691] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:20.747 [2024-04-24 10:26:29.331705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227b010 (9): Bad file descriptor
00:31:20.747 [2024-04-24 10:26:29.474185] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:20.747 Running I/O for 1 seconds...
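The EAL parameters banner above belongs to the bdevperf instance relaunched idle at @72-@75; a hedged sketch of that start-and-wait pattern (the rpc_get_methods polling loop merely approximates the waitforlisten helper and is an assumption, not the helper's actual implementation):

  # -z keeps bdevperf idle until perform_tests arrives over the RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # Poll until the RPC socket answers (stand-in for waitforlisten).
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done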
00:31:20.747 
00:31:20.747 Latency(us)
00:31:20.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:20.747 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:20.747 Verification LBA range: start 0x0 length 0x4000
00:31:20.747 NVMe0n1 : 1.01 16631.76 64.97 0.00 0.00 7663.90 983.04 13734.07
00:31:20.747 ===================================================================================================================
00:31:20.747 Total : 16631.76 64.97 0.00 0.00 7663.90 983.04 13734.07
00:31:20.747 10:26:33 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:20.747 10:26:33 -- host/failover.sh@95 -- # grep -q NVMe0
00:31:21.005 10:26:33 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:21.005 10:26:34 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:21.005 10:26:34 -- host/failover.sh@99 -- # grep -q NVMe0
00:31:21.264 10:26:34 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:21.264 10:26:34 -- host/failover.sh@101 -- # sleep 3
00:31:24.552 10:26:37 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:24.552 10:26:37 -- host/failover.sh@103 -- # grep -q NVMe0
00:31:24.552 10:26:37 -- host/failover.sh@108 -- # killprocess 467974
00:31:24.552 10:26:37 -- common/autotest_common.sh@926 -- # '[' -z 467974 ']'
00:31:24.552 10:26:37 -- common/autotest_common.sh@930 -- # kill -0 467974
00:31:24.552 10:26:37 -- common/autotest_common.sh@931 -- # uname
00:31:24.552 10:26:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:24.552 10:26:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 467974
00:31:24.552 10:26:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:31:24.552 10:26:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:31:24.552 10:26:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 467974'
00:31:24.552 killing process with pid 467974
00:31:24.552 10:26:37 -- common/autotest_common.sh@945 -- # kill 467974
00:31:24.552 10:26:37 -- common/autotest_common.sh@950 -- # wait 467974
00:31:24.811 10:26:37 -- host/failover.sh@110 -- # sync
00:31:24.811 10:26:37 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:24.811 10:26:38 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:24.811 10:26:38 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:24.811 10:26:38 -- host/failover.sh@116 -- # nvmftestfini
00:31:24.811 10:26:38 -- nvmf/common.sh@476 -- # nvmfcleanup
00:31:24.811 10:26:38 -- nvmf/common.sh@116 -- # sync
00:31:24.811 10:26:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:31:24.811 10:26:38 -- nvmf/common.sh@119 -- # set +e
00:31:24.811 10:26:38 -- nvmf/common.sh@120 -- # for i in {1..20}
00:31:24.811 10:26:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:31:24.811 rmmod nvme_tcp
00:31:24.811 rmmod nvme_fabrics
00:31:24.811 rmmod nvme_keyring
00:31:25.070 10:26:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:31:25.070 10:26:38 -- nvmf/common.sh@123 -- # set -e
00:31:25.070 10:26:38 -- nvmf/common.sh@124 -- # return 0
00:31:25.070 10:26:38 -- nvmf/common.sh@477 -- # '[' -n 464711 ']'
00:31:25.070 10:26:38 -- nvmf/common.sh@478 -- # killprocess 464711
00:31:25.070 10:26:38 -- common/autotest_common.sh@926 -- # '[' -z 464711 ']'
00:31:25.070 10:26:38 -- common/autotest_common.sh@930 -- # kill -0 464711
00:31:25.070 10:26:38 -- common/autotest_common.sh@931 -- # uname
00:31:25.070 10:26:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:25.070 10:26:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 464711
00:31:25.070 10:26:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:25.070 10:26:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:25.070 10:26:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 464711'
00:31:25.070 killing process with pid 464711
00:31:25.070 10:26:38 -- common/autotest_common.sh@945 -- # kill 464711
00:31:25.070 10:26:38 -- common/autotest_common.sh@950 -- # wait 464711
00:31:25.329 10:26:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:31:25.329 10:26:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:31:25.329 10:26:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:31:25.329 10:26:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:25.329 10:26:38 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:31:25.329 10:26:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:25.329 10:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:25.329 10:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:27.235 10:26:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:31:27.235 
00:31:27.235 real	0m38.019s
00:31:27.235 user	2m2.527s
00:31:27.235 sys	0m7.504s
00:31:27.235 10:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:27.235 10:26:40 -- common/autotest_common.sh@10 -- # set +x
00:31:27.235 ************************************
00:31:27.235 END TEST nvmf_failover
00:31:27.235 ************************************
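For reference, the teardown traced before the END TEST banner is the stock killprocess plus nvmftestfini combination; a minimal sketch of its visible effect (pids, pid roles, and module names taken from this run; wait works here only because both daemons are children of the test shell):

  kill 467974 && wait 467974    # bdevperf (reactor_0)
  kill 464711 && wait 464711    # nvmf target (reactor_1)
  modprobe -v -r nvme-tcp       # also drops nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1      # flush the test addresses from the NIC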
00:31:27.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.495 10:26:40 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.495 10:26:40 -- nvmf/common.sh@7 -- # uname -s 00:31:27.495 10:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.495 10:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.495 10:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.495 10:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.495 10:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.495 10:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.495 10:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.495 10:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.495 10:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.495 10:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.495 10:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.495 10:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.495 10:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.495 10:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.495 10:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.495 10:26:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.495 10:26:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.495 10:26:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.495 10:26:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.495 10:26:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.495 10:26:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.495 10:26:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.495 10:26:40 -- paths/export.sh@5 -- # export PATH 00:31:27.495 10:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.495 10:26:40 -- nvmf/common.sh@46 -- # : 0 00:31:27.495 10:26:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:27.495 10:26:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:27.495 10:26:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:27.495 10:26:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.495 10:26:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.495 10:26:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:27.495 10:26:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:27.495 10:26:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:27.495 10:26:40 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:27.495 10:26:40 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:27.495 10:26:40 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:27.495 10:26:40 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:27.495 10:26:40 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:27.495 10:26:40 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:27.495 10:26:40 -- host/discovery.sh@25 -- # nvmftestinit 00:31:27.495 10:26:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:27.495 10:26:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.495 10:26:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:27.495 10:26:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:27.495 10:26:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:27.495 10:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.495 10:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:27.495 10:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.495 10:26:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:27.495 10:26:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:27.495 10:26:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:27.495 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:31:32.770 10:26:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:32.770 10:26:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:32.770 10:26:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:32.770 10:26:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:32.770 10:26:45 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:32.770 10:26:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:32.770 10:26:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:32.770 10:26:45 -- nvmf/common.sh@294 -- # net_devs=() 00:31:32.770 10:26:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:32.770 10:26:45 -- nvmf/common.sh@295 -- # e810=() 00:31:32.770 10:26:45 -- nvmf/common.sh@295 -- # local -ga e810 00:31:32.770 10:26:45 -- nvmf/common.sh@296 -- # x722=() 00:31:32.770 10:26:45 -- nvmf/common.sh@296 -- # local -ga x722 00:31:32.770 10:26:45 -- nvmf/common.sh@297 -- # mlx=() 00:31:32.770 10:26:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:32.770 10:26:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.770 10:26:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:32.770 10:26:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:32.770 10:26:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:32.770 10:26:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:32.770 10:26:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:32.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:32.770 10:26:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:32.770 10:26:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:32.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:32.770 10:26:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:32.770 10:26:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:32.770 
10:26:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.770 10:26:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:32.770 10:26:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.770 10:26:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:32.770 Found net devices under 0000:86:00.0: cvl_0_0 00:31:32.770 10:26:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.770 10:26:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:32.770 10:26:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.770 10:26:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:32.770 10:26:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.770 10:26:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:32.770 Found net devices under 0000:86:00.1: cvl_0_1 00:31:32.770 10:26:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.770 10:26:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:32.770 10:26:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:32.770 10:26:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:32.770 10:26:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.770 10:26:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.770 10:26:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.770 10:26:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:32.770 10:26:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.770 10:26:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.770 10:26:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:32.770 10:26:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.770 10:26:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.770 10:26:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:32.770 10:26:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:32.770 10:26:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.770 10:26:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.770 10:26:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.770 10:26:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.770 10:26:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:32.770 10:26:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.770 10:26:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.770 10:26:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.770 10:26:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:32.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:31:32.770 00:31:32.770 --- 10.0.0.2 ping statistics --- 00:31:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.770 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:31:32.770 10:26:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:31:32.770 00:31:32.770 --- 10.0.0.1 ping statistics --- 00:31:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.770 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:32.770 10:26:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.770 10:26:45 -- nvmf/common.sh@410 -- # return 0 00:31:32.770 10:26:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:32.770 10:26:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.770 10:26:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:32.770 10:26:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.770 10:26:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:32.770 10:26:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:32.770 10:26:45 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:32.770 10:26:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:32.770 10:26:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:32.770 10:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:32.770 10:26:45 -- nvmf/common.sh@469 -- # nvmfpid=473079 00:31:32.770 10:26:45 -- nvmf/common.sh@470 -- # waitforlisten 473079 00:31:32.770 10:26:45 -- common/autotest_common.sh@819 -- # '[' -z 473079 ']' 00:31:32.770 10:26:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.770 10:26:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:32.770 10:26:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.771 10:26:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:32.771 10:26:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:32.771 10:26:45 -- common/autotest_common.sh@10 -- # set +x 00:31:32.771 [2024-04-24 10:26:45.407723] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:31:32.771 [2024-04-24 10:26:45.407765] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.771 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.771 [2024-04-24 10:26:45.466868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.771 [2024-04-24 10:26:45.543867] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:32.771 [2024-04-24 10:26:45.543986] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.771 [2024-04-24 10:26:45.543997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.771 [2024-04-24 10:26:45.544004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
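(Note: the nvmf_tcp_init sequence above is easier to read flattened out. One function of the 0000:86:00.x e810 pair found earlier is moved into a private network namespace to act as the target while its sibling stays in the root namespace as the initiator, connectivity is proven with a ping in each direction, and then the target app is launched inside the namespace. This is a sketch of exactly the commands the trace shows, with interface names and addresses taken from it:)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2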
00:31:32.771 [2024-04-24 10:26:45.544018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.030 10:26:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:33.030 10:26:46 -- common/autotest_common.sh@852 -- # return 0 00:31:33.030 10:26:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:33.030 10:26:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 10:26:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.030 10:26:46 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:33.030 10:26:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 [2024-04-24 10:26:46.234329] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.030 10:26:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.030 10:26:46 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:33.030 10:26:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 [2024-04-24 10:26:46.242457] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:33.030 10:26:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.030 10:26:46 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:33.030 10:26:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 null0 00:31:33.030 10:26:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.030 10:26:46 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:33.030 10:26:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 null1 00:31:33.030 10:26:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.030 10:26:46 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:33.030 10:26:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 10:26:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.030 10:26:46 -- host/discovery.sh@45 -- # hostpid=473191 00:31:33.030 10:26:46 -- host/discovery.sh@46 -- # waitforlisten 473191 /tmp/host.sock 00:31:33.030 10:26:46 -- common/autotest_common.sh@819 -- # '[' -z 473191 ']' 00:31:33.030 10:26:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:33.030 10:26:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.030 10:26:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:33.030 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:33.030 10:26:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.030 10:26:46 -- common/autotest_common.sh@10 -- # set +x 00:31:33.030 10:26:46 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:33.290 [2024-04-24 10:26:46.314005] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:31:33.290 [2024-04-24 10:26:46.314047] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473191 ] 00:31:33.290 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.290 [2024-04-24 10:26:46.367091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.290 [2024-04-24 10:26:46.445552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:33.290 [2024-04-24 10:26:46.445662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.856 10:26:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:33.856 10:26:47 -- common/autotest_common.sh@852 -- # return 0 00:31:33.856 10:26:47 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.856 10:26:47 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:33.857 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.857 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:33.857 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.857 10:26:47 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:33.857 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.857 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:33.857 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.857 10:26:47 -- host/discovery.sh@72 -- # notify_id=0 00:31:33.857 10:26:47 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:33.857 10:26:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:33.857 10:26:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:33.857 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.857 10:26:47 -- host/discovery.sh@59 -- # sort 00:31:33.857 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:33.857 10:26:47 -- host/discovery.sh@59 -- # xargs 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:34.115 10:26:47 -- host/discovery.sh@79 -- # get_bdev_list 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.115 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # sort 00:31:34.115 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # xargs 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:34.115 10:26:47 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.115 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.115 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:31:34.115 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # sort 00:31:34.115 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # xargs 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:34.115 10:26:47 -- host/discovery.sh@83 -- # get_bdev_list 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.115 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # sort 00:31:34.115 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # xargs 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:34.115 10:26:47 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:34.115 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.115 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.115 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # sort 00:31:34.115 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.115 10:26:47 -- host/discovery.sh@59 -- # xargs 00:31:34.115 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.115 10:26:47 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:34.115 10:26:47 -- host/discovery.sh@87 -- # get_bdev_list 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.115 10:26:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.116 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.116 10:26:47 -- host/discovery.sh@55 -- # sort 00:31:34.116 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.116 10:26:47 -- host/discovery.sh@55 -- # xargs 00:31:34.374 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:34.374 10:26:47 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.374 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.374 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.374 [2024-04-24 10:26:47.433638] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.374 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:34.374 10:26:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:34.374 10:26:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:34.374 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.374 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.374 10:26:47 -- host/discovery.sh@59 -- # sort 00:31:34.374 10:26:47 
-- host/discovery.sh@59 -- # xargs 00:31:34.374 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:34.374 10:26:47 -- host/discovery.sh@93 -- # get_bdev_list 00:31:34.374 10:26:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.374 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.374 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.374 10:26:47 -- host/discovery.sh@55 -- # xargs 00:31:34.374 10:26:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.374 10:26:47 -- host/discovery.sh@55 -- # sort 00:31:34.374 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:34.374 10:26:47 -- host/discovery.sh@94 -- # get_notification_count 00:31:34.374 10:26:47 -- host/discovery.sh@74 -- # jq '. | length' 00:31:34.374 10:26:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:34.374 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.374 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.374 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@74 -- # notification_count=0 00:31:34.374 10:26:47 -- host/discovery.sh@75 -- # notify_id=0 00:31:34.374 10:26:47 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:34.374 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.374 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.374 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.374 10:26:47 -- host/discovery.sh@100 -- # sleep 1 00:31:34.941 [2024-04-24 10:26:48.172219] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:34.941 [2024-04-24 10:26:48.172237] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:34.941 [2024-04-24 10:26:48.172252] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.200 [2024-04-24 10:26:48.258517] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:35.200 [2024-04-24 10:26:48.314234] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:35.200 [2024-04-24 10:26:48.314254] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:35.458 10:26:48 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:35.458 10:26:48 -- host/discovery.sh@59 -- # sort 00:31:35.458 10:26:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:35.458 10:26:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:35.458 10:26:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.458 10:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.458 10:26:48 -- host/discovery.sh@59 -- # xargs 00:31:35.458 10:26:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.458 10:26:48 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.458 10:26:48 -- host/discovery.sh@102 -- # get_bdev_list 00:31:35.458 10:26:48 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.458 10:26:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:35.458 10:26:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.458 10:26:48 -- host/discovery.sh@55 -- # sort 00:31:35.458 10:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.458 10:26:48 -- host/discovery.sh@55 -- # xargs 00:31:35.458 10:26:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.458 10:26:48 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:35.458 10:26:48 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:35.458 10:26:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:35.458 10:26:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.458 10:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.458 10:26:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:35.458 10:26:48 -- host/discovery.sh@63 -- # sort -n 00:31:35.458 10:26:48 -- host/discovery.sh@63 -- # xargs 00:31:35.458 10:26:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.458 10:26:48 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:35.458 10:26:48 -- host/discovery.sh@104 -- # get_notification_count 00:31:35.458 10:26:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:35.458 10:26:48 -- host/discovery.sh@74 -- # jq '. | length' 00:31:35.458 10:26:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.458 10:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.717 10:26:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.717 10:26:48 -- host/discovery.sh@74 -- # notification_count=1 00:31:35.717 10:26:48 -- host/discovery.sh@75 -- # notify_id=1 00:31:35.717 10:26:48 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:35.717 10:26:48 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:35.717 10:26:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.717 10:26:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.717 10:26:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.717 10:26:48 -- host/discovery.sh@109 -- # sleep 1 00:31:36.652 10:26:49 -- host/discovery.sh@110 -- # get_bdev_list 00:31:36.652 10:26:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.652 10:26:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.652 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.652 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:36.652 10:26:49 -- host/discovery.sh@55 -- # sort 00:31:36.652 10:26:49 -- host/discovery.sh@55 -- # xargs 00:31:36.652 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.652 10:26:49 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:36.652 10:26:49 -- host/discovery.sh@111 -- # get_notification_count 00:31:36.652 10:26:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:36.652 10:26:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:36.652 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.652 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:36.652 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.652 10:26:49 -- host/discovery.sh@74 -- # notification_count=1 00:31:36.652 10:26:49 -- host/discovery.sh@75 -- # notify_id=2 00:31:36.652 10:26:49 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:36.652 10:26:49 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:36.652 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.652 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:31:36.652 [2024-04-24 10:26:49.884433] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:36.652 [2024-04-24 10:26:49.885143] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:36.652 [2024-04-24 10:26:49.885166] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:36.652 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.652 10:26:49 -- host/discovery.sh@117 -- # sleep 1 00:31:36.911 [2024-04-24 10:26:50.012545] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:36.911 [2024-04-24 10:26:50.070149] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:36.911 [2024-04-24 10:26:50.070167] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:36.911 [2024-04-24 10:26:50.070172] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:37.848 10:26:50 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:37.848 10:26:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:37.848 10:26:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:37.848 10:26:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.848 10:26:50 -- host/discovery.sh@59 -- # sort 00:31:37.848 10:26:50 -- common/autotest_common.sh@10 -- # set +x 00:31:37.849 10:26:50 -- host/discovery.sh@59 -- # xargs 00:31:37.849 10:26:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.849 10:26:50 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.849 10:26:50 -- host/discovery.sh@119 -- # get_bdev_list 00:31:37.849 10:26:50 -- host/discovery.sh@55 -- # sort 00:31:37.849 10:26:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.849 10:26:50 -- host/discovery.sh@55 -- # xargs 00:31:37.849 10:26:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:37.849 10:26:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.849 10:26:50 -- common/autotest_common.sh@10 -- # set +x 00:31:37.849 10:26:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.849 10:26:50 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:37.849 10:26:50 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:37.849 10:26:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:37.849 10:26:50 -- host/discovery.sh@63 -- # xargs 00:31:37.849 10:26:50 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:31:37.849 10:26:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.849 10:26:50 -- host/discovery.sh@63 -- # sort -n 00:31:37.849 10:26:50 -- common/autotest_common.sh@10 -- # set +x 00:31:37.849 10:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.849 10:26:51 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:37.849 10:26:51 -- host/discovery.sh@121 -- # get_notification_count 00:31:37.849 10:26:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:37.849 10:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.849 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:31:37.849 10:26:51 -- host/discovery.sh@74 -- # jq '. | length' 00:31:37.849 10:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.849 10:26:51 -- host/discovery.sh@74 -- # notification_count=0 00:31:37.849 10:26:51 -- host/discovery.sh@75 -- # notify_id=2 00:31:37.849 10:26:51 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:37.849 10:26:51 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.849 10:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.849 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:31:37.849 [2024-04-24 10:26:51.068333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.849 [2024-04-24 10:26:51.068360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.849 [2024-04-24 10:26:51.068369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.849 [2024-04-24 10:26:51.068376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.849 [2024-04-24 10:26:51.068383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.849 [2024-04-24 10:26:51.068389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.849 [2024-04-24 10:26:51.068397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.849 [2024-04-24 10:26:51.068407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.849 [2024-04-24 10:26:51.068413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:37.849 [2024-04-24 10:26:51.068958] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:37.849 [2024-04-24 10:26:51.068972] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:37.849 10:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.849 10:26:51 -- host/discovery.sh@127 -- # sleep 1 00:31:37.849 [2024-04-24 10:26:51.078342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:37.849 [2024-04-24 10:26:51.088381] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.849 [2024-04-24 10:26:51.088676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.088966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.088978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:37.849 [2024-04-24 10:26:51.088985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:37.849 [2024-04-24 10:26:51.088996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:37.849 [2024-04-24 10:26:51.089006] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.849 [2024-04-24 10:26:51.089013] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.849 [2024-04-24 10:26:51.089021] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.849 [2024-04-24 10:26:51.089039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.849 [2024-04-24 10:26:51.098436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.849 [2024-04-24 10:26:51.098630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.098921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.098933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:37.849 [2024-04-24 10:26:51.098940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:37.849 [2024-04-24 10:26:51.098951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:37.849 [2024-04-24 10:26:51.098962] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.849 [2024-04-24 10:26:51.098968] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.849 [2024-04-24 10:26:51.098975] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.849 [2024-04-24 10:26:51.098985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:37.849 [2024-04-24 10:26:51.108486] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.849 [2024-04-24 10:26:51.108716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.108932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.108942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:37.849 [2024-04-24 10:26:51.108949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:37.849 [2024-04-24 10:26:51.108963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:37.849 [2024-04-24 10:26:51.108973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.849 [2024-04-24 10:26:51.108979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.849 [2024-04-24 10:26:51.108986] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.849 [2024-04-24 10:26:51.108995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:37.849 [2024-04-24 10:26:51.118536] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:37.849 [2024-04-24 10:26:51.118731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.119001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:37.849 [2024-04-24 10:26:51.119013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:37.849 [2024-04-24 10:26:51.119020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:37.849 [2024-04-24 10:26:51.119030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:37.849 [2024-04-24 10:26:51.119040] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:37.849 [2024-04-24 10:26:51.119046] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:37.849 [2024-04-24 10:26:51.119053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:37.849 [2024-04-24 10:26:51.119062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:38.109 [2024-04-24 10:26:51.128589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:38.109 [2024-04-24 10:26:51.128911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.109 [2024-04-24 10:26:51.129204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.109 [2024-04-24 10:26:51.129216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:38.109 [2024-04-24 10:26:51.129223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:38.109 [2024-04-24 10:26:51.129234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:38.109 [2024-04-24 10:26:51.129250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:38.109 [2024-04-24 10:26:51.129257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:38.109 [2024-04-24 10:26:51.129263] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:38.109 [2024-04-24 10:26:51.129273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:38.109 [2024-04-24 10:26:51.138639] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:38.109 [2024-04-24 10:26:51.138967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.109 [2024-04-24 10:26:51.139205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.109 [2024-04-24 10:26:51.139217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:38.109 [2024-04-24 10:26:51.139224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:38.109 [2024-04-24 10:26:51.139234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:38.109 [2024-04-24 10:26:51.139254] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:38.109 [2024-04-24 10:26:51.139262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:38.109 [2024-04-24 10:26:51.139268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:38.109 [2024-04-24 10:26:51.139277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
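(Note: the repeating block above and below is the expected fallout of host/discovery.sh@126 removing the 4420 listener: errno 111 is ECONNREFUSED, so each reconnect attempt to 10.0.0.2:4420 is refused and the controller reset gives up, until the discovery poller re-reads the discovery log page and prunes the dead path while keeping 4421; that is the '4420 not found' / '4421 found again' pair just below. The check the test then makes can be reproduced by hand, assuming rpc.py is pointed at the host app's socket as in the trace:)

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected once failover settles: 4421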
00:31:38.109 [2024-04-24 10:26:51.148686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:38.109 [2024-04-24 10:26:51.149026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.109 [2024-04-24 10:26:51.149320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.109 [2024-04-24 10:26:51.149332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x73c020 with addr=10.0.0.2, port=4420 00:31:38.109 [2024-04-24 10:26:51.149339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x73c020 is same with the state(5) to be set 00:31:38.109 [2024-04-24 10:26:51.149349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73c020 (9): Bad file descriptor 00:31:38.109 [2024-04-24 10:26:51.149365] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:38.109 [2024-04-24 10:26:51.149372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:38.109 [2024-04-24 10:26:51.149378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:38.109 [2024-04-24 10:26:51.149388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:38.109 [2024-04-24 10:26:51.157391] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:38.109 [2024-04-24 10:26:51.157405] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:39.045 10:26:52 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:39.045 10:26:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:39.045 10:26:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:39.045 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.045 10:26:52 -- host/discovery.sh@59 -- # sort 00:31:39.045 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:31:39.045 10:26:52 -- host/discovery.sh@59 -- # xargs 00:31:39.045 10:26:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@129 -- # get_bdev_list 00:31:39.045 10:26:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.045 10:26:52 -- host/discovery.sh@55 -- # xargs 00:31:39.045 10:26:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:39.045 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.045 10:26:52 -- host/discovery.sh@55 -- # sort 00:31:39.045 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:31:39.045 10:26:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:39.045 10:26:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:39.045 10:26:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:39.045 10:26:52 -- host/discovery.sh@63 -- # xargs 00:31:39.045 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.045 10:26:52 -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.045 10:26:52 -- host/discovery.sh@63 -- # sort -n 00:31:39.045 10:26:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@131 -- # get_notification_count 00:31:39.045 10:26:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:39.045 10:26:52 -- host/discovery.sh@74 -- # jq '. | length' 00:31:39.045 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.045 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:31:39.045 10:26:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@74 -- # notification_count=0 00:31:39.045 10:26:52 -- host/discovery.sh@75 -- # notify_id=2 00:31:39.045 10:26:52 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:39.045 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.045 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:31:39.045 10:26:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.045 10:26:52 -- host/discovery.sh@135 -- # sleep 1 00:31:40.426 10:26:53 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:40.426 10:26:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:40.426 10:26:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:40.426 10:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.426 10:26:53 -- host/discovery.sh@59 -- # sort 00:31:40.426 10:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:40.426 10:26:53 -- host/discovery.sh@59 -- # xargs 00:31:40.426 10:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.426 10:26:53 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:40.426 10:26:53 -- host/discovery.sh@137 -- # get_bdev_list 00:31:40.426 10:26:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.426 10:26:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:40.426 10:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.426 10:26:53 -- host/discovery.sh@55 -- # sort 00:31:40.426 10:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:40.426 10:26:53 -- host/discovery.sh@55 -- # xargs 00:31:40.426 10:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.426 10:26:53 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:40.426 10:26:53 -- host/discovery.sh@138 -- # get_notification_count 00:31:40.426 10:26:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:40.426 10:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.426 10:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:40.426 10:26:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:40.426 10:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:40.426 10:26:53 -- host/discovery.sh@74 -- # notification_count=2 00:31:40.426 10:26:53 -- host/discovery.sh@75 -- # notify_id=4 00:31:40.426 10:26:53 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:40.426 10:26:53 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.426 10:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.426 10:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:41.362 [2024-04-24 10:26:54.438039] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:41.362 [2024-04-24 10:26:54.438055] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:41.362 [2024-04-24 10:26:54.438067] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:41.362 [2024-04-24 10:26:54.525331] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:41.362 [2024-04-24 10:26:54.624862] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:41.362 [2024-04-24 10:26:54.624888] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:41.363 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.363 10:26:54 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:41.363 10:26:54 -- common/autotest_common.sh@640 -- # local es=0 00:31:41.363 10:26:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:41.363 10:26:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:41.363 10:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:41.363 10:26:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:41.363 10:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:41.363 10:26:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:41.363 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.363 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:41.622 request: 00:31:41.622 { 00:31:41.622 "name": "nvme", 00:31:41.622 "trtype": "tcp", 00:31:41.622 "traddr": "10.0.0.2", 00:31:41.622 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:41.622 "adrfam": "ipv4", 00:31:41.622 "trsvcid": "8009", 00:31:41.622 "wait_for_attach": true, 00:31:41.622 "method": "bdev_nvme_start_discovery", 00:31:41.622 "req_id": 1 00:31:41.622 } 00:31:41.622 Got JSON-RPC error response 00:31:41.622 response: 00:31:41.622 { 00:31:41.622 "code": -17, 00:31:41.622 "message": "File exists" 00:31:41.622 } 00:31:41.622 10:26:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:41.622 10:26:54 -- common/autotest_common.sh@643 -- # es=1 00:31:41.622 10:26:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:41.622 10:26:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:41.622 10:26:54 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:41.622 10:26:54 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:41.622 10:26:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:41.622 10:26:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:41.622 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.622 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:41.622 10:26:54 -- host/discovery.sh@67 -- # sort 00:31:41.622 10:26:54 -- host/discovery.sh@67 -- # xargs 00:31:41.622 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.622 10:26:54 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:41.622 10:26:54 -- host/discovery.sh@147 -- # get_bdev_list 00:31:41.622 10:26:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:41.622 10:26:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.622 10:26:54 -- host/discovery.sh@55 -- # xargs 00:31:41.622 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.622 10:26:54 -- host/discovery.sh@55 -- # sort 00:31:41.622 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:41.622 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.622 10:26:54 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:41.622 10:26:54 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:41.622 10:26:54 -- common/autotest_common.sh@640 -- # local es=0 00:31:41.622 10:26:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:41.622 10:26:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:41.622 10:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:41.622 10:26:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:41.622 10:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:41.622 10:26:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:41.622 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.622 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:41.622 request: 00:31:41.622 { 00:31:41.622 "name": "nvme_second", 00:31:41.622 "trtype": "tcp", 00:31:41.622 "traddr": "10.0.0.2", 00:31:41.622 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:41.622 "adrfam": "ipv4", 00:31:41.622 "trsvcid": "8009", 00:31:41.622 "wait_for_attach": true, 00:31:41.622 "method": "bdev_nvme_start_discovery", 00:31:41.622 "req_id": 1 00:31:41.622 } 00:31:41.622 Got JSON-RPC error response 00:31:41.622 response: 00:31:41.622 { 00:31:41.622 "code": -17, 00:31:41.622 "message": "File exists" 00:31:41.622 } 00:31:41.622 10:26:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:41.622 10:26:54 -- common/autotest_common.sh@643 -- # es=1 00:31:41.622 10:26:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:41.622 10:26:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:41.622 10:26:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:41.622 10:26:54 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:41.623 10:26:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:31:41.623 10:26:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:41.623 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.623 10:26:54 -- host/discovery.sh@67 -- # sort 00:31:41.623 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:41.623 10:26:54 -- host/discovery.sh@67 -- # xargs 00:31:41.623 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.623 10:26:54 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:41.623 10:26:54 -- host/discovery.sh@153 -- # get_bdev_list 00:31:41.623 10:26:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.623 10:26:54 -- host/discovery.sh@55 -- # xargs 00:31:41.623 10:26:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:41.623 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.623 10:26:54 -- host/discovery.sh@55 -- # sort 00:31:41.623 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:41.623 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:41.623 10:26:54 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:41.623 10:26:54 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:41.623 10:26:54 -- common/autotest_common.sh@640 -- # local es=0 00:31:41.623 10:26:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:41.623 10:26:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:41.623 10:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:41.623 10:26:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:41.623 10:26:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:41.623 10:26:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:41.623 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:41.623 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:31:43.000 [2024-04-24 10:26:55.861379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.000 [2024-04-24 10:26:55.861703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.000 [2024-04-24 10:26:55.861718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x756770 with addr=10.0.0.2, port=8010 00:31:43.000 [2024-04-24 10:26:55.861731] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:43.000 [2024-04-24 10:26:55.861738] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:43.000 [2024-04-24 10:26:55.861745] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:43.947 [2024-04-24 10:26:56.863827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.947 [2024-04-24 10:26:56.864128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:43.947 [2024-04-24 10:26:56.864141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76e2e0 with addr=10.0.0.2, port=8010 00:31:43.947 [2024-04-24 10:26:56.864154] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: 
*ERROR*: failed to create admin qpair 00:31:43.947 [2024-04-24 10:26:56.864160] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:43.947 [2024-04-24 10:26:56.864166] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:44.884 [2024-04-24 10:26:57.865940] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:44.884 request: 00:31:44.884 { 00:31:44.884 "name": "nvme_second", 00:31:44.884 "trtype": "tcp", 00:31:44.884 "traddr": "10.0.0.2", 00:31:44.884 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:44.884 "adrfam": "ipv4", 00:31:44.884 "trsvcid": "8010", 00:31:44.884 "attach_timeout_ms": 3000, 00:31:44.884 "method": "bdev_nvme_start_discovery", 00:31:44.884 "req_id": 1 00:31:44.884 } 00:31:44.884 Got JSON-RPC error response 00:31:44.884 response: 00:31:44.884 { 00:31:44.884 "code": -110, 00:31:44.884 "message": "Connection timed out" 00:31:44.884 } 00:31:44.884 10:26:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:44.884 10:26:57 -- common/autotest_common.sh@643 -- # es=1 00:31:44.884 10:26:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:44.884 10:26:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:44.884 10:26:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:44.884 10:26:57 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:44.884 10:26:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:44.884 10:26:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:44.884 10:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.884 10:26:57 -- host/discovery.sh@67 -- # sort 00:31:44.884 10:26:57 -- common/autotest_common.sh@10 -- # set +x 00:31:44.884 10:26:57 -- host/discovery.sh@67 -- # xargs 00:31:44.884 10:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.884 10:26:57 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:44.884 10:26:57 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:44.884 10:26:57 -- host/discovery.sh@162 -- # kill 473191 00:31:44.884 10:26:57 -- host/discovery.sh@163 -- # nvmftestfini 00:31:44.884 10:26:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:44.884 10:26:57 -- nvmf/common.sh@116 -- # sync 00:31:44.884 10:26:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:44.884 10:26:57 -- nvmf/common.sh@119 -- # set +e 00:31:44.884 10:26:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:44.884 10:26:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:44.884 rmmod nvme_tcp 00:31:44.884 rmmod nvme_fabrics 00:31:44.884 rmmod nvme_keyring 00:31:44.884 10:26:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:44.884 10:26:57 -- nvmf/common.sh@123 -- # set -e 00:31:44.884 10:26:57 -- nvmf/common.sh@124 -- # return 0 00:31:44.884 10:26:57 -- nvmf/common.sh@477 -- # '[' -n 473079 ']' 00:31:44.884 10:26:57 -- nvmf/common.sh@478 -- # killprocess 473079 00:31:44.884 10:26:57 -- common/autotest_common.sh@926 -- # '[' -z 473079 ']' 00:31:44.884 10:26:57 -- common/autotest_common.sh@930 -- # kill -0 473079 00:31:44.884 10:26:58 -- common/autotest_common.sh@931 -- # uname 00:31:44.884 10:26:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.884 10:26:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 473079 00:31:44.884 10:26:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:44.884 10:26:58 -- common/autotest_common.sh@936 -- # '[' 
reactor_1 = sudo ']' 00:31:44.884 10:26:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 473079' 00:31:44.884 killing process with pid 473079 00:31:44.884 10:26:58 -- common/autotest_common.sh@945 -- # kill 473079 00:31:44.884 10:26:58 -- common/autotest_common.sh@950 -- # wait 473079 00:31:45.143 10:26:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:45.143 10:26:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:45.143 10:26:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:45.143 10:26:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.143 10:26:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:45.143 10:26:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.143 10:26:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.143 10:26:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.048 10:27:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:47.049 00:31:47.049 real 0m19.827s 00:31:47.049 user 0m27.255s 00:31:47.049 sys 0m4.951s 00:31:47.049 10:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.049 10:27:00 -- common/autotest_common.sh@10 -- # set +x 00:31:47.049 ************************************ 00:31:47.049 END TEST nvmf_discovery 00:31:47.049 ************************************ 00:31:47.307 10:27:00 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:47.308 10:27:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:47.308 10:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:47.308 10:27:00 -- common/autotest_common.sh@10 -- # set +x 00:31:47.308 ************************************ 00:31:47.308 START TEST nvmf_discovery_remove_ifc 00:31:47.308 ************************************ 00:31:47.308 10:27:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:47.308 * Looking for test storage... 
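Before the discovery_remove_ifc output begins: the nvmf_discovery suite that just ended exercised two behaviors visible above. Registering a second discovery service under an already-used controller name is rejected with JSON-RPC error -17 "File exists", and pointing discovery at an unused port (8010) with an attach timeout fails with -110 "Connection timed out". A minimal sketch of reproducing the duplicate-registration error by hand, assuming a running target with its RPC socket at /tmp/host.sock and using SPDK's stock scripts/rpc.py client (flags copied from the rpc_cmd lines in the log above):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w    # first call: attaches ctrlr "nvme"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w    # same -b name again: expect
                                          # error -17 "File exists"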
00:31:47.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.308 10:27:00 -- nvmf/common.sh@7 -- # uname -s 00:31:47.308 10:27:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.308 10:27:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.308 10:27:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.308 10:27:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.308 10:27:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.308 10:27:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.308 10:27:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.308 10:27:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.308 10:27:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.308 10:27:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.308 10:27:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.308 10:27:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.308 10:27:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.308 10:27:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.308 10:27:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.308 10:27:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.308 10:27:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.308 10:27:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.308 10:27:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.308 10:27:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.308 10:27:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.308 10:27:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.308 10:27:00 -- paths/export.sh@5 -- # export PATH 00:31:47.308 10:27:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.308 10:27:00 -- nvmf/common.sh@46 -- # : 0 00:31:47.308 10:27:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:47.308 10:27:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:47.308 10:27:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:47.308 10:27:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.308 10:27:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.308 10:27:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:47.308 10:27:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:47.308 10:27:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:47.308 10:27:00 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:47.308 10:27:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:47.308 10:27:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.308 10:27:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:47.308 10:27:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:47.308 10:27:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:47.308 10:27:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.308 10:27:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.308 10:27:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.308 10:27:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:47.308 10:27:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:47.308 10:27:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:47.308 10:27:00 -- common/autotest_common.sh@10 -- # set +x 00:31:52.666 10:27:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:52.666 10:27:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:52.666 10:27:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:52.666 10:27:05 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:52.666 10:27:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:52.666 10:27:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:52.666 10:27:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:52.666 10:27:05 -- nvmf/common.sh@294 -- # net_devs=() 00:31:52.666 10:27:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:52.666 10:27:05 -- nvmf/common.sh@295 -- # e810=() 00:31:52.666 10:27:05 -- nvmf/common.sh@295 -- # local -ga e810 00:31:52.666 10:27:05 -- nvmf/common.sh@296 -- # x722=() 00:31:52.666 10:27:05 -- nvmf/common.sh@296 -- # local -ga x722 00:31:52.666 10:27:05 -- nvmf/common.sh@297 -- # mlx=() 00:31:52.666 10:27:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:52.666 10:27:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:52.666 10:27:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:52.666 10:27:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:52.666 10:27:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:52.666 10:27:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:52.666 10:27:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:52.666 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:52.666 10:27:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:52.666 10:27:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:52.666 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:52.666 10:27:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:52.666 10:27:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:52.666 10:27:05 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:52.666 10:27:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.666 10:27:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:52.666 10:27:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.666 10:27:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:52.666 Found net devices under 0000:86:00.0: cvl_0_0 00:31:52.666 10:27:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.666 10:27:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:52.666 10:27:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.666 10:27:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:52.666 10:27:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.666 10:27:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:52.666 Found net devices under 0000:86:00.1: cvl_0_1 00:31:52.666 10:27:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.666 10:27:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:52.666 10:27:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:52.666 10:27:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:52.666 10:27:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:52.666 10:27:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.666 10:27:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:52.666 10:27:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:52.666 10:27:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:52.666 10:27:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:52.666 10:27:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:52.666 10:27:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:52.666 10:27:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:52.666 10:27:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.666 10:27:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:52.666 10:27:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:52.666 10:27:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:52.667 10:27:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:52.667 10:27:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:52.667 10:27:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:52.667 10:27:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:52.667 10:27:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.667 10:27:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.667 10:27:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:52.667 10:27:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:52.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:52.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:31:52.667 00:31:52.667 --- 10.0.0.2 ping statistics --- 00:31:52.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.667 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:52.667 10:27:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:52.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:31:52.667 00:31:52.667 --- 10.0.0.1 ping statistics --- 00:31:52.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.667 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:31:52.667 10:27:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.667 10:27:05 -- nvmf/common.sh@410 -- # return 0 00:31:52.667 10:27:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:52.667 10:27:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.667 10:27:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:52.667 10:27:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:52.667 10:27:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.667 10:27:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:52.667 10:27:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:52.667 10:27:05 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:52.667 10:27:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:52.667 10:27:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:52.667 10:27:05 -- common/autotest_common.sh@10 -- # set +x 00:31:52.667 10:27:05 -- nvmf/common.sh@469 -- # nvmfpid=478884 00:31:52.667 10:27:05 -- nvmf/common.sh@470 -- # waitforlisten 478884 00:31:52.667 10:27:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:52.667 10:27:05 -- common/autotest_common.sh@819 -- # '[' -z 478884 ']' 00:31:52.667 10:27:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.667 10:27:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:52.667 10:27:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.667 10:27:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:52.667 10:27:05 -- common/autotest_common.sh@10 -- # set +x 00:31:52.667 [2024-04-24 10:27:05.915728] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:31:52.667 [2024-04-24 10:27:05.915774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.667 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.926 [2024-04-24 10:27:05.976033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.926 [2024-04-24 10:27:06.047493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:52.926 [2024-04-24 10:27:06.047602] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:52.926 [2024-04-24 10:27:06.047610] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:52.926 [2024-04-24 10:27:06.047616] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:52.926 [2024-04-24 10:27:06.047632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.493 10:27:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:53.493 10:27:06 -- common/autotest_common.sh@852 -- # return 0 00:31:53.493 10:27:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:53.493 10:27:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:53.493 10:27:06 -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 10:27:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.493 10:27:06 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:53.493 10:27:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.493 10:27:06 -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 [2024-04-24 10:27:06.762713] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.493 [2024-04-24 10:27:06.770861] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:53.752 null0 00:31:53.752 [2024-04-24 10:27:06.802867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.752 10:27:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.752 10:27:06 -- host/discovery_remove_ifc.sh@59 -- # hostpid=479134 00:31:53.752 10:27:06 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 479134 /tmp/host.sock 00:31:53.752 10:27:06 -- common/autotest_common.sh@819 -- # '[' -z 479134 ']' 00:31:53.752 10:27:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:53.752 10:27:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:53.752 10:27:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:53.752 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:53.752 10:27:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:53.752 10:27:06 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:53.752 10:27:06 -- common/autotest_common.sh@10 -- # set +x 00:31:53.752 [2024-04-24 10:27:06.867377] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:31:53.752 [2024-04-24 10:27:06.867419] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479134 ] 00:31:53.752 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.752 [2024-04-24 10:27:06.920862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.752 [2024-04-24 10:27:06.999030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:53.752 [2024-04-24 10:27:06.999147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.689 10:27:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:54.689 10:27:07 -- common/autotest_common.sh@852 -- # return 0 00:31:54.689 10:27:07 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.689 10:27:07 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:54.689 10:27:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.689 10:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:54.689 10:27:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.689 10:27:07 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:54.689 10:27:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.689 10:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:54.689 10:27:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.689 10:27:07 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:54.689 10:27:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.689 10:27:07 -- common/autotest_common.sh@10 -- # set +x 00:31:55.624 [2024-04-24 10:27:08.801684] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:55.624 [2024-04-24 10:27:08.801704] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:55.624 [2024-04-24 10:27:08.801717] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:55.883 [2024-04-24 10:27:08.930113] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:55.883 [2024-04-24 10:27:09.113172] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:55.883 [2024-04-24 10:27:09.113212] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:55.883 [2024-04-24 10:27:09.113232] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:55.883 [2024-04-24 10:27:09.113244] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:55.883 [2024-04-24 10:27:09.113264] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:55.883 10:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.883 10:27:09 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:55.883 10:27:09 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:55.883 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.883 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:55.883 10:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.883 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:55.883 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:31:55.883 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:55.883 [2024-04-24 10:27:09.120582] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9148f0 was disconnected and freed. delete nvme_qpair. 00:31:55.883 10:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.143 10:27:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.143 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.143 10:27:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:56.143 10:27:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.080 10:27:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:57.080 10:27:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.080 10:27:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:57.080 10:27:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:57.080 10:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.080 10:27:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:57.080 10:27:10 -- common/autotest_common.sh@10 -- # set +x 00:31:57.080 10:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.339 10:27:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:57.339 10:27:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.276 10:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.276 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:31:58.276 10:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.276 10:27:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.224 10:27:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.224 10:27:12 -- common/autotest_common.sh@10 -- # set +x 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.224 10:27:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:59.224 10:27:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.633 10:27:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.633 10:27:13 -- common/autotest_common.sh@10 -- # set +x 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.633 10:27:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:00.633 10:27:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.246 10:27:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.504 10:27:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.504 10:27:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.504 10:27:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:01.504 10:27:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.504 10:27:14 -- common/autotest_common.sh@10 -- # set +x 00:32:01.504 10:27:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.504 10:27:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:01.504 [2024-04-24 10:27:14.554913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:01.504 [2024-04-24 10:27:14.554954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.504 [2024-04-24 10:27:14.554964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.504 [2024-04-24 10:27:14.554973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.504 [2024-04-24 10:27:14.554984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.504 [2024-04-24 10:27:14.554991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.504 [2024-04-24 10:27:14.554997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.504 [2024-04-24 10:27:14.555004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.504 [2024-04-24 10:27:14.555010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.504 [2024-04-24 10:27:14.555018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:01.504 [2024-04-24 10:27:14.555024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:01.504 [2024-04-24 10:27:14.555031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dad50 is same with the state(5) to be set 00:32:01.504 [2024-04-24 10:27:14.564936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dad50 (9): Bad file descriptor 00:32:01.504 10:27:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.504 10:27:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.504 [2024-04-24 10:27:14.574976] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:02.440 10:27:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.440 10:27:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.440 10:27:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.440 [2024-04-24 10:27:15.580893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:02.440 10:27:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.440 10:27:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.440 10:27:15 -- common/autotest_common.sh@10 -- # set +x 00:32:02.440 10:27:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.377 [2024-04-24 10:27:16.604092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:03.377 [2024-04-24 10:27:16.604137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dad50 with addr=10.0.0.2, port=4420 00:32:03.377 [2024-04-24 10:27:16.604152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dad50 is same with the state(5) to be set 00:32:03.377 [2024-04-24 10:27:16.604528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dad50 (9): Bad file descriptor 00:32:03.377 [2024-04-24 10:27:16.604555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
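The errno-110 burst and the "Resetting controller failed" message above are induced deliberately: the test pulled the target's address and link out from under the connected host. The trigger commands appear verbatim earlier in this suite (host/discovery_remove_ifc.sh@75-76); the netns and device names are the ones this CI node created during nvmf_tcp_init:

    # drop the target address and down the link inside the target netns
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

Because discovery was started with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1, the host retries roughly once per second, then fails the controller, and the bdev list polled by get_bdev_list goes empty until the interface is brought back up.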
00:32:03.377 [2024-04-24 10:27:16.604577] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:03.377 [2024-04-24 10:27:16.604601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.377 [2024-04-24 10:27:16.604614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.377 [2024-04-24 10:27:16.604627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.377 [2024-04-24 10:27:16.604636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.377 [2024-04-24 10:27:16.604646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.377 [2024-04-24 10:27:16.604655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.377 [2024-04-24 10:27:16.604671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.377 [2024-04-24 10:27:16.604681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.377 [2024-04-24 10:27:16.604691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.377 [2024-04-24 10:27:16.604701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.377 [2024-04-24 10:27:16.604711] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:03.377 [2024-04-24 10:27:16.605178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8db160 (9): Bad file descriptor 00:32:03.377 [2024-04-24 10:27:16.606191] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:03.377 [2024-04-24 10:27:16.606206] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:03.377 10:27:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.377 10:27:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.377 10:27:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.755 10:27:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.755 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.755 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.755 10:27:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.755 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.755 10:27:17 -- common/autotest_common.sh@10 -- # set +x 00:32:04.755 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.755 10:27:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.755 10:27:17 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.756 10:27:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.756 10:27:17 -- common/autotest_common.sh@10 -- # set +x 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.756 10:27:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:04.756 10:27:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.692 [2024-04-24 10:27:18.615979] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:05.692 [2024-04-24 10:27:18.615997] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:05.692 [2024-04-24 10:27:18.616009] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:05.692 [2024-04-24 10:27:18.746416] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:05.692 10:27:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.692 10:27:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.692 10:27:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.692 10:27:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.692 10:27:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.692 10:27:18 -- common/autotest_common.sh@10 -- # set +x 00:32:05.692 
10:27:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.692 10:27:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.692 10:27:18 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:05.692 10:27:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.951 [2024-04-24 10:27:18.972340] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:05.951 [2024-04-24 10:27:18.972374] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:05.951 [2024-04-24 10:27:18.972393] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:05.951 [2024-04-24 10:27:18.972405] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:05.951 [2024-04-24 10:27:18.972412] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:05.951 [2024-04-24 10:27:18.976124] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8fef90 was disconnected and freed. delete nvme_qpair. 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.886 10:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.886 10:27:19 -- common/autotest_common.sh@10 -- # set +x 00:32:06.886 10:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:06.886 10:27:19 -- host/discovery_remove_ifc.sh@90 -- # killprocess 479134 00:32:06.886 10:27:19 -- common/autotest_common.sh@926 -- # '[' -z 479134 ']' 00:32:06.886 10:27:19 -- common/autotest_common.sh@930 -- # kill -0 479134 00:32:06.886 10:27:19 -- common/autotest_common.sh@931 -- # uname 00:32:06.886 10:27:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:06.886 10:27:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 479134 00:32:06.886 10:27:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:06.886 10:27:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:06.886 10:27:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 479134' 00:32:06.886 killing process with pid 479134 00:32:06.886 10:27:19 -- common/autotest_common.sh@945 -- # kill 479134 00:32:06.886 10:27:19 -- common/autotest_common.sh@950 -- # wait 479134 00:32:06.886 10:27:20 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:06.886 10:27:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:06.886 10:27:20 -- nvmf/common.sh@116 -- # sync 00:32:06.886 10:27:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:06.886 10:27:20 -- nvmf/common.sh@119 -- # set +e 00:32:06.886 10:27:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:06.886 10:27:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:06.886 rmmod nvme_tcp 00:32:07.145 rmmod nvme_fabrics 00:32:07.145 rmmod nvme_keyring 00:32:07.145 10:27:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:07.145 10:27:20 -- nvmf/common.sh@123 -- # set -e 00:32:07.145 10:27:20 -- nvmf/common.sh@124 
-- # return 0 00:32:07.145 10:27:20 -- nvmf/common.sh@477 -- # '[' -n 478884 ']' 00:32:07.145 10:27:20 -- nvmf/common.sh@478 -- # killprocess 478884 00:32:07.145 10:27:20 -- common/autotest_common.sh@926 -- # '[' -z 478884 ']' 00:32:07.145 10:27:20 -- common/autotest_common.sh@930 -- # kill -0 478884 00:32:07.145 10:27:20 -- common/autotest_common.sh@931 -- # uname 00:32:07.145 10:27:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:07.145 10:27:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 478884 00:32:07.145 10:27:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:07.145 10:27:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:07.145 10:27:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 478884' 00:32:07.145 killing process with pid 478884 00:32:07.145 10:27:20 -- common/autotest_common.sh@945 -- # kill 478884 00:32:07.145 10:27:20 -- common/autotest_common.sh@950 -- # wait 478884 00:32:07.404 10:27:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:07.404 10:27:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:07.404 10:27:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:07.404 10:27:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:07.404 10:27:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:07.404 10:27:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.404 10:27:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:07.404 10:27:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.315 10:27:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:09.316 00:32:09.316 real 0m22.179s 00:32:09.316 user 0m27.931s 00:32:09.316 sys 0m5.234s 00:32:09.316 10:27:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.316 10:27:22 -- common/autotest_common.sh@10 -- # set +x 00:32:09.316 ************************************ 00:32:09.316 END TEST nvmf_discovery_remove_ifc 00:32:09.316 ************************************ 00:32:09.316 10:27:22 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:32:09.316 10:27:22 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:09.316 10:27:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:09.316 10:27:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.316 10:27:22 -- common/autotest_common.sh@10 -- # set +x 00:32:09.316 ************************************ 00:32:09.316 START TEST nvmf_digest 00:32:09.316 ************************************ 00:32:09.316 10:27:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:09.576 * Looking for test storage... 
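Both suites above tear down through the same killprocess helper, whose xtrace shows up twice (pids 478884/479134 and 473079/479134). A minimal sketch of what those traced steps amount to, reconstructed from the log; the real helper lives in SPDK's test/common/autotest_common.sh and also branches on uname for non-Linux hosts, so this is an approximation rather than its exact source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                  # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_1
        [ "$name" = sudo ] && return 1              # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # valid: the target was launched
    }                                               # by this same shell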
00:32:09.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.576 10:27:22 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.576 10:27:22 -- nvmf/common.sh@7 -- # uname -s 00:32:09.576 10:27:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.576 10:27:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.576 10:27:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.576 10:27:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.576 10:27:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.576 10:27:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.576 10:27:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.576 10:27:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.576 10:27:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.576 10:27:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.576 10:27:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.576 10:27:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.576 10:27:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.576 10:27:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.576 10:27:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.576 10:27:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.576 10:27:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.576 10:27:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.576 10:27:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.576 10:27:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.576 10:27:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.576 10:27:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.576 10:27:22 -- paths/export.sh@5 -- # export PATH 00:32:09.576 10:27:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.576 10:27:22 -- nvmf/common.sh@46 -- # : 0 00:32:09.576 10:27:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:09.576 10:27:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:09.576 10:27:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:09.576 10:27:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.576 10:27:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.576 10:27:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:09.576 10:27:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:09.576 10:27:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:09.576 10:27:22 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:09.576 10:27:22 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:09.576 10:27:22 -- host/digest.sh@16 -- # runtime=2 00:32:09.576 10:27:22 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:32:09.576 10:27:22 -- host/digest.sh@132 -- # nvmftestinit 00:32:09.576 10:27:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:09.576 10:27:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.576 10:27:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:09.576 10:27:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:09.576 10:27:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:09.576 10:27:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.576 10:27:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.576 10:27:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.576 10:27:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:09.576 10:27:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:09.576 10:27:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:09.576 10:27:22 -- common/autotest_common.sh@10 -- # set +x 00:32:14.851 10:27:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:14.851 10:27:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:14.851 10:27:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:14.851 10:27:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:14.851 10:27:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:14.851 10:27:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:14.851 10:27:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:14.851 10:27:27 -- 
nvmf/common.sh@294 -- # net_devs=() 00:32:14.851 10:27:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:14.851 10:27:27 -- nvmf/common.sh@295 -- # e810=() 00:32:14.851 10:27:27 -- nvmf/common.sh@295 -- # local -ga e810 00:32:14.851 10:27:27 -- nvmf/common.sh@296 -- # x722=() 00:32:14.851 10:27:27 -- nvmf/common.sh@296 -- # local -ga x722 00:32:14.851 10:27:27 -- nvmf/common.sh@297 -- # mlx=() 00:32:14.851 10:27:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:14.851 10:27:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.851 10:27:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:14.851 10:27:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:14.851 10:27:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:14.851 10:27:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:14.851 10:27:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:14.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:14.851 10:27:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:14.851 10:27:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:14.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:14.851 10:27:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:14.851 10:27:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:14.851 10:27:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.851 10:27:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:14.851 10:27:27 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.851 10:27:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:14.851 Found net devices under 0000:86:00.0: cvl_0_0 00:32:14.851 10:27:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.851 10:27:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:14.851 10:27:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.851 10:27:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:14.851 10:27:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.851 10:27:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:14.851 Found net devices under 0000:86:00.1: cvl_0_1 00:32:14.851 10:27:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.851 10:27:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:14.851 10:27:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:14.851 10:27:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:14.851 10:27:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.851 10:27:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.851 10:27:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.851 10:27:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:14.851 10:27:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.851 10:27:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.851 10:27:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:14.851 10:27:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.851 10:27:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.851 10:27:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:14.851 10:27:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:14.851 10:27:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.851 10:27:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.851 10:27:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.851 10:27:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.851 10:27:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:14.851 10:27:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.851 10:27:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.851 10:27:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.851 10:27:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:14.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:32:14.851 00:32:14.851 --- 10.0.0.2 ping statistics --- 00:32:14.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.851 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:14.851 10:27:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:14.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:32:14.851 00:32:14.851 --- 10.0.0.1 ping statistics --- 00:32:14.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.851 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:32:14.851 10:27:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.851 10:27:27 -- nvmf/common.sh@410 -- # return 0 00:32:14.851 10:27:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:14.851 10:27:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.851 10:27:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:14.851 10:27:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.851 10:27:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:14.851 10:27:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:14.851 10:27:27 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:14.851 10:27:27 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:32:14.851 10:27:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:14.852 10:27:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:14.852 10:27:27 -- common/autotest_common.sh@10 -- # set +x 00:32:14.852 ************************************ 00:32:14.852 START TEST nvmf_digest_clean 00:32:14.852 ************************************ 00:32:14.852 10:27:27 -- common/autotest_common.sh@1104 -- # run_digest 00:32:14.852 10:27:27 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:32:14.852 10:27:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:14.852 10:27:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:14.852 10:27:27 -- common/autotest_common.sh@10 -- # set +x 00:32:14.852 10:27:27 -- nvmf/common.sh@469 -- # nvmfpid=485028 00:32:14.852 10:27:27 -- nvmf/common.sh@470 -- # waitforlisten 485028 00:32:14.852 10:27:27 -- common/autotest_common.sh@819 -- # '[' -z 485028 ']' 00:32:14.852 10:27:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.852 10:27:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:14.852 10:27:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.852 10:27:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:14.852 10:27:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:14.852 10:27:27 -- common/autotest_common.sh@10 -- # set +x 00:32:14.852 [2024-04-24 10:27:27.635012] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:32:14.852 [2024-04-24 10:27:27.635057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.852 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.852 [2024-04-24 10:27:27.691425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.852 [2024-04-24 10:27:27.769646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:14.852 [2024-04-24 10:27:27.769748] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.852 [2024-04-24 10:27:27.769757] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.852 [2024-04-24 10:27:27.769764] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.852 [2024-04-24 10:27:27.769778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.420 10:27:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:15.420 10:27:28 -- common/autotest_common.sh@852 -- # return 0 00:32:15.420 10:27:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:15.420 10:27:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:15.420 10:27:28 -- common/autotest_common.sh@10 -- # set +x 00:32:15.420 10:27:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.420 10:27:28 -- host/digest.sh@120 -- # common_target_config 00:32:15.420 10:27:28 -- host/digest.sh@43 -- # rpc_cmd 00:32:15.420 10:27:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.420 10:27:28 -- common/autotest_common.sh@10 -- # set +x 00:32:15.420 null0 00:32:15.420 [2024-04-24 10:27:28.542406] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.420 [2024-04-24 10:27:28.566578] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.420 10:27:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.420 10:27:28 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:32:15.420 10:27:28 -- host/digest.sh@77 -- # local rw bs qd 00:32:15.420 10:27:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:15.420 10:27:28 -- host/digest.sh@80 -- # rw=randread 00:32:15.420 10:27:28 -- host/digest.sh@80 -- # bs=4096 00:32:15.420 10:27:28 -- host/digest.sh@80 -- # qd=128 00:32:15.420 10:27:28 -- host/digest.sh@82 -- # bperfpid=485274 00:32:15.420 10:27:28 -- host/digest.sh@83 -- # waitforlisten 485274 /var/tmp/bperf.sock 00:32:15.420 10:27:28 -- common/autotest_common.sh@819 -- # '[' -z 485274 ']' 00:32:15.420 10:27:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:15.420 10:27:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:15.420 10:27:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:15.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
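[Editor's note] The traces around this point show the shape of every run_bperf pass in this test: bdevperf is started with --wait-for-rpc on its own UNIX socket, framework_start_init releases it, a TCP controller is attached with data digest enabled (--ddgst), and perform_tests drives the workload for 2 seconds. A minimal sketch of that sequence, with the long workspace paths shortened for readability; the socket path, flags, and RPC names are copied from the traces, and the shortened paths are the only liberty taken:

  sock=/var/tmp/bperf.sock
  bdevperf -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  rpc.py -s "$sock" framework_start_init
  rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  bdevperf.py -s "$sock" perform_tests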
00:32:15.420 10:27:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:15.420 10:27:28 -- common/autotest_common.sh@10 -- # set +x 00:32:15.420 10:27:28 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:15.420 [2024-04-24 10:27:28.609475] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:15.420 [2024-04-24 10:27:28.609516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485274 ] 00:32:15.420 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.420 [2024-04-24 10:27:28.662353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.679 [2024-04-24 10:27:28.733053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.247 10:27:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:16.247 10:27:29 -- common/autotest_common.sh@852 -- # return 0 00:32:16.247 10:27:29 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:16.247 10:27:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:16.247 10:27:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:16.505 10:27:29 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:16.505 10:27:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:16.764 nvme0n1 00:32:16.764 10:27:29 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:16.764 10:27:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:16.764 Running I/O for 2 seconds... 
00:32:19.304 00:32:19.304 Latency(us) 00:32:19.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.304 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:19.304 nvme0n1 : 2.04 28439.40 111.09 0.00 0.00 4426.78 1780.87 43538.70 00:32:19.304 =================================================================================================================== 00:32:19.304 Total : 28439.40 111.09 0.00 0.00 4426.78 1780.87 43538.70 00:32:19.304 0 00:32:19.304 10:27:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:19.304 10:27:32 -- host/digest.sh@92 -- # get_accel_stats 00:32:19.304 10:27:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:19.304 10:27:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:19.304 | select(.opcode=="crc32c") 00:32:19.304 | "\(.module_name) \(.executed)"' 00:32:19.304 10:27:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:19.304 10:27:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:19.304 10:27:32 -- host/digest.sh@93 -- # exp_module=software 00:32:19.304 10:27:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:19.304 10:27:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:19.304 10:27:32 -- host/digest.sh@97 -- # killprocess 485274 00:32:19.304 10:27:32 -- common/autotest_common.sh@926 -- # '[' -z 485274 ']' 00:32:19.304 10:27:32 -- common/autotest_common.sh@930 -- # kill -0 485274 00:32:19.304 10:27:32 -- common/autotest_common.sh@931 -- # uname 00:32:19.304 10:27:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:19.304 10:27:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 485274 00:32:19.304 10:27:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:19.304 10:27:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:19.304 10:27:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 485274' 00:32:19.304 killing process with pid 485274 00:32:19.304 10:27:32 -- common/autotest_common.sh@945 -- # kill 485274 00:32:19.304 Received shutdown signal, test time was about 2.000000 seconds 00:32:19.304 00:32:19.304 Latency(us) 00:32:19.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.304 =================================================================================================================== 00:32:19.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.304 10:27:32 -- common/autotest_common.sh@950 -- # wait 485274 00:32:19.304 10:27:32 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:32:19.304 10:27:32 -- host/digest.sh@77 -- # local rw bs qd 00:32:19.304 10:27:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:19.304 10:27:32 -- host/digest.sh@80 -- # rw=randread 00:32:19.304 10:27:32 -- host/digest.sh@80 -- # bs=131072 00:32:19.304 10:27:32 -- host/digest.sh@80 -- # qd=16 00:32:19.304 10:27:32 -- host/digest.sh@82 -- # bperfpid=485971 00:32:19.304 10:27:32 -- host/digest.sh@83 -- # waitforlisten 485971 /var/tmp/bperf.sock 00:32:19.304 10:27:32 -- common/autotest_common.sh@819 -- # '[' -z 485971 ']' 00:32:19.304 10:27:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.304 10:27:32 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 
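[Editor's note] After each pass the test confirms that CRC-32C digest work actually ran: get_accel_stats (host/digest.sh@36-37 above) queries the accel framework over the same bperf socket and filters the stats for crc32c operations, and host/digest.sh@93-95 assert that the reporting module is "software" and that the executed count is greater than zero. A sketch of that check using only the RPC and jq filter shown in the traces (the comment is illustrative, not captured output):

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g. "software <executed>", which read -r splits into
  # acc_module / acc_executed for the assertions above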
00:32:19.304 10:27:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:19.304 10:27:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:19.304 10:27:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:19.304 10:27:32 -- common/autotest_common.sh@10 -- # set +x 00:32:19.304 [2024-04-24 10:27:32.474828] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:19.304 [2024-04-24 10:27:32.474876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485971 ] 00:32:19.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:19.304 Zero copy mechanism will not be used. 00:32:19.304 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.304 [2024-04-24 10:27:32.528372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.568 [2024-04-24 10:27:32.606455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.134 10:27:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:20.134 10:27:33 -- common/autotest_common.sh@852 -- # return 0 00:32:20.134 10:27:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:20.134 10:27:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:20.134 10:27:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:20.393 10:27:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:20.393 10:27:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:20.651 nvme0n1 00:32:20.651 10:27:33 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:20.651 10:27:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:20.910 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:20.910 Zero copy mechanism will not be used. 00:32:20.910 Running I/O for 2 seconds... 
00:32:22.815 00:32:22.815 Latency(us) 00:32:22.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:22.815 nvme0n1 : 2.01 4242.89 530.36 0.00 0.00 3768.39 2863.64 8719.14 00:32:22.815 =================================================================================================================== 00:32:22.815 Total : 4242.89 530.36 0.00 0.00 3768.39 2863.64 8719.14 00:32:22.815 0 00:32:22.815 10:27:35 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:22.815 10:27:35 -- host/digest.sh@92 -- # get_accel_stats 00:32:22.815 10:27:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:22.815 10:27:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:22.815 | select(.opcode=="crc32c") 00:32:22.815 | "\(.module_name) \(.executed)"' 00:32:22.815 10:27:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:23.077 10:27:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:23.077 10:27:36 -- host/digest.sh@93 -- # exp_module=software 00:32:23.077 10:27:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:23.077 10:27:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:23.077 10:27:36 -- host/digest.sh@97 -- # killprocess 485971 00:32:23.077 10:27:36 -- common/autotest_common.sh@926 -- # '[' -z 485971 ']' 00:32:23.077 10:27:36 -- common/autotest_common.sh@930 -- # kill -0 485971 00:32:23.077 10:27:36 -- common/autotest_common.sh@931 -- # uname 00:32:23.077 10:27:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:23.077 10:27:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 485971 00:32:23.077 10:27:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:23.077 10:27:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:23.077 10:27:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 485971' 00:32:23.077 killing process with pid 485971 00:32:23.077 10:27:36 -- common/autotest_common.sh@945 -- # kill 485971 00:32:23.077 Received shutdown signal, test time was about 2.000000 seconds 00:32:23.077 00:32:23.077 Latency(us) 00:32:23.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.077 =================================================================================================================== 00:32:23.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:23.077 10:27:36 -- common/autotest_common.sh@950 -- # wait 485971 00:32:23.360 10:27:36 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:32:23.360 10:27:36 -- host/digest.sh@77 -- # local rw bs qd 00:32:23.360 10:27:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:23.360 10:27:36 -- host/digest.sh@80 -- # rw=randwrite 00:32:23.360 10:27:36 -- host/digest.sh@80 -- # bs=4096 00:32:23.360 10:27:36 -- host/digest.sh@80 -- # qd=128 00:32:23.360 10:27:36 -- host/digest.sh@82 -- # bperfpid=486499 00:32:23.360 10:27:36 -- host/digest.sh@83 -- # waitforlisten 486499 /var/tmp/bperf.sock 00:32:23.360 10:27:36 -- common/autotest_common.sh@819 -- # '[' -z 486499 ']' 00:32:23.360 10:27:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.360 10:27:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:23.360 10:27:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:32:23.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.360 10:27:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:23.360 10:27:36 -- common/autotest_common.sh@10 -- # set +x 00:32:23.360 10:27:36 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:23.360 [2024-04-24 10:27:36.453756] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:23.360 [2024-04-24 10:27:36.453806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486499 ] 00:32:23.360 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.360 [2024-04-24 10:27:36.508792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.360 [2024-04-24 10:27:36.578894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.961 10:27:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:23.961 10:27:37 -- common/autotest_common.sh@852 -- # return 0 00:32:23.961 10:27:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:23.961 10:27:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:24.221 10:27:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:24.221 10:27:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.221 10:27:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.789 nvme0n1 00:32:24.789 10:27:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:24.789 10:27:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.789 Running I/O for 2 seconds... 
00:32:26.694 00:32:26.695 Latency(us) 00:32:26.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.695 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.695 nvme0n1 : 2.00 27831.19 108.72 0.00 0.00 4591.49 3547.49 15158.76 00:32:26.695 =================================================================================================================== 00:32:26.695 Total : 27831.19 108.72 0.00 0.00 4591.49 3547.49 15158.76 00:32:26.695 0 00:32:26.695 10:27:39 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:26.695 10:27:39 -- host/digest.sh@92 -- # get_accel_stats 00:32:26.695 10:27:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:26.695 10:27:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:26.695 | select(.opcode=="crc32c") 00:32:26.695 | "\(.module_name) \(.executed)"' 00:32:26.695 10:27:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:26.954 10:27:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:26.954 10:27:40 -- host/digest.sh@93 -- # exp_module=software 00:32:26.954 10:27:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:26.954 10:27:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:26.954 10:27:40 -- host/digest.sh@97 -- # killprocess 486499 00:32:26.954 10:27:40 -- common/autotest_common.sh@926 -- # '[' -z 486499 ']' 00:32:26.954 10:27:40 -- common/autotest_common.sh@930 -- # kill -0 486499 00:32:26.954 10:27:40 -- common/autotest_common.sh@931 -- # uname 00:32:26.954 10:27:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:26.954 10:27:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 486499 00:32:26.954 10:27:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:26.954 10:27:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:26.954 10:27:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 486499' 00:32:26.954 killing process with pid 486499 00:32:26.954 10:27:40 -- common/autotest_common.sh@945 -- # kill 486499 00:32:26.954 Received shutdown signal, test time was about 2.000000 seconds 00:32:26.954 00:32:26.954 Latency(us) 00:32:26.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.954 =================================================================================================================== 00:32:26.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.954 10:27:40 -- common/autotest_common.sh@950 -- # wait 486499 00:32:27.212 10:27:40 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:27.212 10:27:40 -- host/digest.sh@77 -- # local rw bs qd 00:32:27.212 10:27:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:27.212 10:27:40 -- host/digest.sh@80 -- # rw=randwrite 00:32:27.212 10:27:40 -- host/digest.sh@80 -- # bs=131072 00:32:27.212 10:27:40 -- host/digest.sh@80 -- # qd=16 00:32:27.212 10:27:40 -- host/digest.sh@82 -- # bperfpid=487177 00:32:27.212 10:27:40 -- host/digest.sh@83 -- # waitforlisten 487177 /var/tmp/bperf.sock 00:32:27.212 10:27:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:27.212 10:27:40 -- common/autotest_common.sh@819 -- # '[' -z 487177 ']' 00:32:27.212 10:27:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
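[Editor's note] The clean-digest test is one bperf pass repeated over a small parameter matrix: the traces show host/digest.sh@122-125 calling run_bperf with randread 4096 128, randread 131072 16, randwrite 4096 128, and, starting here, randwrite 131072 16 (rw, block size in bytes, queue depth, as set at host/digest.sh@80). digest.sh makes the four calls one after another; the loop below is only shorthand for that sequence. The recurring "I/O size of 131072 is greater than zero copy threshold (65536)" notice follows from the block size alone: 131072-byte (128 KiB) I/Os exceed bdevperf's 65536-byte (64 KiB) zero-copy limit.

  for args in "randread 4096 128"  "randread 131072 16" \
              "randwrite 4096 128" "randwrite 131072 16"; do
      run_bperf $args    # rw / bs / qd, per host/digest.sh@80
  done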
00:32:27.212 10:27:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:27.212 10:27:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:27.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:27.212 10:27:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:27.212 10:27:40 -- common/autotest_common.sh@10 -- # set +x 00:32:27.213 [2024-04-24 10:27:40.430058] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:27.213 [2024-04-24 10:27:40.430111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487177 ] 00:32:27.213 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:27.213 Zero copy mechanism will not be used. 00:32:27.213 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.213 [2024-04-24 10:27:40.484359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.472 [2024-04-24 10:27:40.552274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.039 10:27:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:28.040 10:27:41 -- common/autotest_common.sh@852 -- # return 0 00:32:28.040 10:27:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:28.040 10:27:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:28.040 10:27:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:28.298 10:27:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:28.298 10:27:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:28.557 nvme0n1 00:32:28.557 10:27:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:28.557 10:27:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:28.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:28.815 Zero copy mechanism will not be used. 00:32:28.815 Running I/O for 2 seconds... 
00:32:30.720 00:32:30.720 Latency(us) 00:32:30.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:30.720 nvme0n1 : 2.00 5536.66 692.08 0.00 0.00 2885.26 1866.35 15728.64 00:32:30.720 =================================================================================================================== 00:32:30.720 Total : 5536.66 692.08 0.00 0.00 2885.26 1866.35 15728.64 00:32:30.720 0 00:32:30.720 10:27:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:30.720 10:27:43 -- host/digest.sh@92 -- # get_accel_stats 00:32:30.720 10:27:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:30.720 10:27:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:30.720 | select(.opcode=="crc32c") 00:32:30.720 | "\(.module_name) \(.executed)"' 00:32:30.720 10:27:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:30.979 10:27:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:30.979 10:27:44 -- host/digest.sh@93 -- # exp_module=software 00:32:30.979 10:27:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:30.979 10:27:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:30.979 10:27:44 -- host/digest.sh@97 -- # killprocess 487177 00:32:30.979 10:27:44 -- common/autotest_common.sh@926 -- # '[' -z 487177 ']' 00:32:30.979 10:27:44 -- common/autotest_common.sh@930 -- # kill -0 487177 00:32:30.979 10:27:44 -- common/autotest_common.sh@931 -- # uname 00:32:30.979 10:27:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:30.979 10:27:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 487177 00:32:30.979 10:27:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:30.979 10:27:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:30.979 10:27:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 487177' 00:32:30.979 killing process with pid 487177 00:32:30.979 10:27:44 -- common/autotest_common.sh@945 -- # kill 487177 00:32:30.979 Received shutdown signal, test time was about 2.000000 seconds 00:32:30.979 00:32:30.979 Latency(us) 00:32:30.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.979 =================================================================================================================== 00:32:30.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.979 10:27:44 -- common/autotest_common.sh@950 -- # wait 487177 00:32:31.238 10:27:44 -- host/digest.sh@126 -- # killprocess 485028 00:32:31.238 10:27:44 -- common/autotest_common.sh@926 -- # '[' -z 485028 ']' 00:32:31.238 10:27:44 -- common/autotest_common.sh@930 -- # kill -0 485028 00:32:31.238 10:27:44 -- common/autotest_common.sh@931 -- # uname 00:32:31.238 10:27:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:31.238 10:27:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 485028 00:32:31.238 10:27:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:31.238 10:27:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:31.238 10:27:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 485028' 00:32:31.238 killing process with pid 485028 00:32:31.238 10:27:44 -- common/autotest_common.sh@945 -- # kill 485028 00:32:31.238 10:27:44 -- common/autotest_common.sh@950 -- # wait 485028 00:32:31.497 
00:32:31.497 real 0m16.992s 00:32:31.497 user 0m32.528s 00:32:31.497 sys 0m4.276s 00:32:31.497 10:27:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:31.497 10:27:44 -- common/autotest_common.sh@10 -- # set +x 00:32:31.497 ************************************ 00:32:31.497 END TEST nvmf_digest_clean 00:32:31.497 ************************************ 00:32:31.497 10:27:44 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:32:31.497 10:27:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:31.497 10:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:31.497 10:27:44 -- common/autotest_common.sh@10 -- # set +x 00:32:31.497 ************************************ 00:32:31.497 START TEST nvmf_digest_error 00:32:31.497 ************************************ 00:32:31.497 10:27:44 -- common/autotest_common.sh@1104 -- # run_digest_error 00:32:31.497 10:27:44 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:32:31.497 10:27:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:31.497 10:27:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:31.497 10:27:44 -- common/autotest_common.sh@10 -- # set +x 00:32:31.497 10:27:44 -- nvmf/common.sh@469 -- # nvmfpid=487914 00:32:31.497 10:27:44 -- nvmf/common.sh@470 -- # waitforlisten 487914 00:32:31.497 10:27:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:31.497 10:27:44 -- common/autotest_common.sh@819 -- # '[' -z 487914 ']' 00:32:31.498 10:27:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.498 10:27:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:31.498 10:27:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.498 10:27:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:31.498 10:27:44 -- common/autotest_common.sh@10 -- # set +x 00:32:31.498 [2024-04-24 10:27:44.671332] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:31.498 [2024-04-24 10:27:44.671381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.498 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.498 [2024-04-24 10:27:44.728287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.756 [2024-04-24 10:27:44.806168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:31.756 [2024-04-24 10:27:44.806274] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.756 [2024-04-24 10:27:44.806282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.756 [2024-04-24 10:27:44.806288] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:31.756 [2024-04-24 10:27:44.806308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.322 10:27:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:32.322 10:27:45 -- common/autotest_common.sh@852 -- # return 0 00:32:32.322 10:27:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:32.322 10:27:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:32.322 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:32.322 10:27:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.322 10:27:45 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:32.322 10:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.322 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:32.322 [2024-04-24 10:27:45.508356] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:32.322 10:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.322 10:27:45 -- host/digest.sh@104 -- # common_target_config 00:32:32.322 10:27:45 -- host/digest.sh@43 -- # rpc_cmd 00:32:32.322 10:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.322 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:32.322 null0 00:32:32.322 [2024-04-24 10:27:45.596841] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.581 [2024-04-24 10:27:45.621017] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.581 10:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.581 10:27:45 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:32:32.581 10:27:45 -- host/digest.sh@54 -- # local rw bs qd 00:32:32.581 10:27:45 -- host/digest.sh@56 -- # rw=randread 00:32:32.581 10:27:45 -- host/digest.sh@56 -- # bs=4096 00:32:32.581 10:27:45 -- host/digest.sh@56 -- # qd=128 00:32:32.581 10:27:45 -- host/digest.sh@58 -- # bperfpid=488160 00:32:32.581 10:27:45 -- host/digest.sh@60 -- # waitforlisten 488160 /var/tmp/bperf.sock 00:32:32.581 10:27:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:32.581 10:27:45 -- common/autotest_common.sh@819 -- # '[' -z 488160 ']' 00:32:32.581 10:27:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:32.581 10:27:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:32.581 10:27:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:32.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:32.581 10:27:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:32.581 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:32:32.581 [2024-04-24 10:27:45.668566] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:32:32.581 [2024-04-24 10:27:45.668605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488160 ] 00:32:32.581 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.581 [2024-04-24 10:27:45.721553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.581 [2024-04-24 10:27:45.799186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.518 10:27:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:33.518 10:27:46 -- common/autotest_common.sh@852 -- # return 0 00:32:33.518 10:27:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:33.518 10:27:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:33.518 10:27:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:33.518 10:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:33.518 10:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:33.518 10:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:33.518 10:27:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.518 10:27:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.777 nvme0n1 00:32:33.777 10:27:46 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:33.777 10:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:33.777 10:27:46 -- common/autotest_common.sh@10 -- # set +x 00:32:33.777 10:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:33.777 10:27:46 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:33.777 10:27:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:33.777 Running I/O for 2 seconds... 
00:32:33.777 [2024-04-24 10:27:46.995274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0)
00:32:33.777 [2024-04-24 10:27:46.995307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.777 [2024-04-24 10:27:46.995317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (a data digest error on tqpair=(0x20749c0), the affected READ on qid:1, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly 140 further READ commands between 10:27:47.008 and 10:27:48.261, differing only in cid and lba; the pipeline timestamp advances from 00:32:33.777 to 00:32:35.082 ...]
00:32:35.082 [2024-04-24 10:27:48.269637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0)
00:32:35.082 [2024-04-24 10:27:48.269657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:35.082 [2024-04-24 10:27:48.269665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.278349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.278369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.278377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.287120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.287140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.287148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.296158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.296177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.296185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.304691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.304711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.304719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.313425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.313448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.313457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.322552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.322572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.322580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.331021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.331042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.339570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.339590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.082 [2024-04-24 10:27:48.339598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.082 [2024-04-24 10:27:48.348244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.082 [2024-04-24 10:27:48.348264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.089 [2024-04-24 10:27:48.348272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.089 [2024-04-24 10:27:48.357366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.089 [2024-04-24 10:27:48.357386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.089 [2024-04-24 10:27:48.357394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.349 [2024-04-24 10:27:48.366119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.349 [2024-04-24 10:27:48.366138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.349 [2024-04-24 10:27:48.366146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.349 [2024-04-24 10:27:48.374698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.349 [2024-04-24 10:27:48.374718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.349 [2024-04-24 10:27:48.374725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.349 [2024-04-24 10:27:48.384075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.349 [2024-04-24 10:27:48.384095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.349 [2024-04-24 10:27:48.384103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.349 [2024-04-24 10:27:48.392494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.349 [2024-04-24 10:27:48.392514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.349 [2024-04-24 10:27:48.392521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.349 [2024-04-24 10:27:48.401109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.349 [2024-04-24 10:27:48.401129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.349 [2024-04-24 10:27:48.401137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.349 [2024-04-24 10:27:48.409715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.349 [2024-04-24 10:27:48.409735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.409743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.418933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.418952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.418960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.427591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.427612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.427620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.436207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.436226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.436235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.444725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.444745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.444753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.453847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 
00:32:35.350 [2024-04-24 10:27:48.453866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.453874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.462471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.462490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.462501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.471073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.471094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.471102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.480386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.480405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.480413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.488932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.488952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.488960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.497342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.497363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.497371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.506110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.506130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.506137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.515239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.515259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.515267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.523679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.523699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.523707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.532680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.532701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.532710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.541748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.541772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.541780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.550319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.550340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.550347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.559347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.559368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.559376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.568133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.568153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.568161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.576314] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.576335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.576342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.585493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.585513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.585520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.594166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.594186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.594194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.602737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.602756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.602764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.611216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.611236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.611244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.350 [2024-04-24 10:27:48.620448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.350 [2024-04-24 10:27:48.620469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.350 [2024-04-24 10:27:48.620476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.610 [2024-04-24 10:27:48.629135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.610 [2024-04-24 10:27:48.629154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.610 [2024-04-24 10:27:48.629162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:35.610 [2024-04-24 10:27:48.637893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.610 [2024-04-24 10:27:48.637912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.610 [2024-04-24 10:27:48.637920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.610 [2024-04-24 10:27:48.647027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.610 [2024-04-24 10:27:48.647047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.610 [2024-04-24 10:27:48.647055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.610 [2024-04-24 10:27:48.655703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.610 [2024-04-24 10:27:48.655723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.610 [2024-04-24 10:27:48.655731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.610 [2024-04-24 10:27:48.664259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.610 [2024-04-24 10:27:48.664278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.664286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.673556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.673575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.673583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.682270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.682291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.682299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.690569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.690589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.690601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.699177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.699197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.708263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.708283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.708291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.716789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.716808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.716816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.725355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.725375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.725384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.734765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.734785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.734793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.743193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.743220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.751827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.751846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.751854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.760091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.760110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.760118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.769431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.769451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.769459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.777899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.777919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.777927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.786761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.786780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.786788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.795866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.795886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.795894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.804299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.804318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.804326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.812718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.812737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:35.611 [2024-04-24 10:27:48.812745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.821442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.821462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.821470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.830551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.830571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.830579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.839097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.839117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.839128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.847773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.847792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.847800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.856851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.856870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.856879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.865408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.865427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.865435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.873878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.873898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:4135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.873905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.611 [2024-04-24 10:27:48.883204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.611 [2024-04-24 10:27:48.883224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.611 [2024-04-24 10:27:48.883232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.892108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.892128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.892136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.900739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.900758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.900766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.909938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.909958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.909966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.918471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.918492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.918500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.927123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.927143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.927151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.935863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.935882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.935891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.944911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.944931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.944939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.953256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.953275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.953283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.961831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.961850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.961858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.970970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.970990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.970998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 [2024-04-24 10:27:48.979537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20749c0) 00:32:35.871 [2024-04-24 10:27:48.979558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.871 [2024-04-24 10:27:48.979566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.871 00:32:35.871 Latency(us) 00:32:35.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:35.871 nvme0n1 : 2.00 28875.45 112.79 0.00 0.00 4428.45 2137.04 13905.03 00:32:35.871 =================================================================================================================== 00:32:35.871 Total : 28875.45 112.79 0.00 0.00 4428.45 2137.04 13905.03 00:32:35.871 0 00:32:35.871 10:27:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:35.872 10:27:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
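The run finishes at ~28.9k IOPS with zero recorded failures even though every read above first completed with a data digest error: with --bdev-retry-count -1 (the same bdev_nvme_set_options call traced for the next run below) each transient transport error is retried, and --nvme-error-stat keeps a per-status-code tally that the test reads back next. As a rough cross-check against the raw console output, the completion prints themselves can be counted; this is a hypothetical one-liner, not part of digest.sh, LOGFILE stands in for wherever this console output was captured, and the count covers every run in the log, not just this one:

    # Count transient-transport-error completion prints in the captured log.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOGFILE"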
00:32:35.872 10:27:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:35.872 10:27:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:35.872 10:27:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:35.872 | .driver_specific
00:32:35.872 | .nvme_error
00:32:35.872 | .status_code
00:32:35.872 | .command_transient_transport_error'
00:32:35.872 10:27:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:36.131 10:27:49 -- host/digest.sh@71 -- # (( 226 > 0 ))
00:32:36.131 10:27:49 -- host/digest.sh@73 -- # killprocess 488160
00:32:36.131 10:27:49 -- common/autotest_common.sh@926 -- # '[' -z 488160 ']'
00:32:36.131 10:27:49 -- common/autotest_common.sh@930 -- # kill -0 488160
00:32:36.131 10:27:49 -- common/autotest_common.sh@931 -- # uname
00:32:36.131 10:27:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:36.131 10:27:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 488160
00:32:36.131 10:27:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:36.131 10:27:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:36.131 10:27:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 488160'
00:32:36.131 killing process with pid 488160
00:32:36.131 10:27:49 -- common/autotest_common.sh@945 -- # kill 488160
00:32:36.131 Received shutdown signal, test time was about 2.000000 seconds
00:32:36.131
00:32:36.131 Latency(us)
00:32:36.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:36.131 ===================================================================================================================
00:32:36.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:36.131 10:27:49 -- common/autotest_common.sh@950 -- # wait 488160
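The teardown above shows the test's pass criterion end to end: get_transient_errcount asks the bdevperf app for bdev I/O statistics over its private RPC socket and pulls the transient-transport-error tally out of the JSON with jq, and the check passes only because that count is positive ((( 226 > 0 ))). A minimal sketch of helpers consistent with the traced calls; the function names, socket path, and jq filter are taken from the trace, but the exact bodies in digest.sh may differ:

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above, not a copy of digest.sh.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    bperf_rpc() {
        # Talk to the bdevperf app on its dedicated RPC socket.
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # --nvme-error-stat makes bdev_get_iostat carry per-status-code counters.
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Pass/fail check as traced: at least one injected digest error must have
    # surfaced as a transient transport error.
    (( $(get_transient_errcount nvme0n1) > 0 ))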
00:32:36.390 10:27:49 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:32:36.390 10:27:49 -- host/digest.sh@54 -- # local rw bs qd
00:32:36.390 10:27:49 -- host/digest.sh@56 -- # rw=randread
00:32:36.390 10:27:49 -- host/digest.sh@56 -- # bs=131072
00:32:36.390 10:27:49 -- host/digest.sh@56 -- # qd=16
00:32:36.390 10:27:49 -- host/digest.sh@58 -- # bperfpid=488786
00:32:36.390 10:27:49 -- host/digest.sh@60 -- # waitforlisten 488786 /var/tmp/bperf.sock
00:32:36.390 10:27:49 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:36.391 10:27:49 -- common/autotest_common.sh@819 -- # '[' -z 488786 ']'
00:32:36.391 10:27:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:36.391 10:27:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:36.391 10:27:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:36.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:36.391 10:27:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:36.391 10:27:49 -- common/autotest_common.sh@10 -- # set +x
00:32:36.391 [2024-04-24 10:27:49.476489] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:32:36.391 [2024-04-24 10:27:49.476537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488786 ]
00:32:36.391 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:36.391 Zero copy mechanism will not be used.
00:32:36.391 EAL: No free 2048 kB hugepages reported on node 1
00:32:36.391 [2024-04-24 10:27:49.529796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:36.391 [2024-04-24 10:27:49.607262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:37.325 10:27:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:37.325 10:27:50 -- common/autotest_common.sh@852 -- # return 0
00:32:37.325 10:27:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:37.325 10:27:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:37.325 10:27:50 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:37.325 10:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.325 10:27:50 -- common/autotest_common.sh@10 -- # set +x
00:32:37.325 10:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.325 10:27:50 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:37.325 10:27:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:37.584 nvme0n1
00:32:37.584 10:27:50 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:37.584 10:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.584 10:27:50 -- common/autotest_common.sh@10 -- # set +x
00:32:37.584 10:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.584 10:27:50 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:37.584 10:27:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:37.584 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:37.584 Zero copy mechanism will not be used.
00:32:37.584 Running I/O for 2 seconds...
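Taken together, the trace above is the whole error-injection recipe for this run: the bdev layer is told to retry transient errors indefinitely and keep per-error-code statistics, crc32c corruption is injected into the accel framework (accel_error_inject_error -o crc32c -t corrupt -i 32), and the controller is attached with data digests enabled (--ddgst) so each corrupted CRC surfaces as the digest errors that follow. Condensed into one runnable sequence; this is a sketch assembled from the traced calls, and rpc_cmd's destination socket is hidden inside xtrace_disable in this log, so the stub below assumes the default socket:

    # Assumes bdevperf is already listening on /var/tmp/bperf.sock.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    bperf_py()  { "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"; }
    rpc_cmd()   { "$rootdir/scripts/rpc.py" "$@"; }  # assumption: default RPC socket

    # Retry transient errors forever and keep per-status-code counters.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean state, attach with data digest checking enabled,
    # then re-enable crc32c corruption (the -i 32 argument is reproduced
    # verbatim from the trace).
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the 2-second queued randread workload.
    bperf_py perform_tests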
00:32:37.584 [2024-04-24 10:27:50.834305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:37.584 [2024-04-24 10:27:50.834339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:37.584 [2024-04-24 10:27:50.834349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats for this run's 128 KiB reads (cid:15, len:32, sqhd cycling 0001/0021/0041/0061) roughly every 6-12 ms from 10:27:50.834 onward ...]
00:32:37.844 [2024-04-24 10:27:51.010454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:37.844 [2024-04-24 10:27:51.010474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:37.844 [2024-04-24 10:27:51.010482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.844 [2024-04-24 10:27:51.016679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.844 [2024-04-24 10:27:51.016699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.844 [2024-04-24 10:27:51.016707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.844 [2024-04-24 10:27:51.022866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.844 [2024-04-24 10:27:51.022887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.844 [2024-04-24 10:27:51.022898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.844 [2024-04-24 10:27:51.029090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.844 [2024-04-24 10:27:51.029111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.844 [2024-04-24 10:27:51.029118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.844 [2024-04-24 10:27:51.035783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.844 [2024-04-24 10:27:51.035804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.844 [2024-04-24 10:27:51.035813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.844 [2024-04-24 10:27:51.044920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.044940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.044948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.056543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.056563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.056571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.066094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.066113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.066121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.074946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.074972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.074979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.082908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.082928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.082936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.093067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.093092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.093100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.101039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.101063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.101076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.845 [2024-04-24 10:27:51.109123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:37.845 [2024-04-24 10:27:51.109144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.845 [2024-04-24 10:27:51.109152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.121327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.121348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.121356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.131894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.131915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.131923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.142666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.142689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.142697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.152793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.152814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.152823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.162415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.162437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.162445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.171226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.171248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.171256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.178433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.178454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.178462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.185469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.185490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.185497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.192753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 
00:32:38.105 [2024-04-24 10:27:51.192773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.192781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.199958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.199979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.199987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.206469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.206489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.206498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.213650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.213670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.213678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.219981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.220001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.220009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.225927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.225948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.225956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.231808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.231829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.231838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.237918] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.237940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.237953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.244567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.244588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.244597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.105 [2024-04-24 10:27:51.251689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.105 [2024-04-24 10:27:51.251711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-04-24 10:27:51.251719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.258210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.258231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.264399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.264420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.264428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.270585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.270606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.270614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.276829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.276850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.276858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:32:38.106 [2024-04-24 10:27:51.283469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.283490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.283499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.289660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.289681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.289689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.295826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.295848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.295855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.301988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.302007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.302015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.308283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.308304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.308312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.314502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.314524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.314531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.320765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.320787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.320794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.327058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.327086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.327094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.333425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.333446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.333454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.338854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.338874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.338882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.343801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.343821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.343833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.348784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.348805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.348813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.353710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.353730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.353738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.358655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.358676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.358684] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.363670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.363690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.363697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.368666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.368686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.368694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.373760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.373781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.373789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.106 [2024-04-24 10:27:51.379636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.106 [2024-04-24 10:27:51.379657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.106 [2024-04-24 10:27:51.379666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.366 [2024-04-24 10:27:51.385913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.366 [2024-04-24 10:27:51.385934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.366 [2024-04-24 10:27:51.385942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.366 [2024-04-24 10:27:51.392134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.366 [2024-04-24 10:27:51.392158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.366 [2024-04-24 10:27:51.392167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.366 [2024-04-24 10:27:51.398347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.366 [2024-04-24 10:27:51.398366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.366 [2024-04-24 10:27:51.398374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.366 [2024-04-24 10:27:51.404556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.366 [2024-04-24 10:27:51.404577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.366 [2024-04-24 10:27:51.404585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.366 [2024-04-24 10:27:51.410957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.366 [2024-04-24 10:27:51.410978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.366 [2024-04-24 10:27:51.410987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.416431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.416453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.416461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.421753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.421774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.421782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.427365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.427386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.427394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.433547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.433568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.439719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.439740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:38.367 [2024-04-24 10:27:51.439748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.445943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.445963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.445972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.452052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.452077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.452085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.458320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.458341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.458348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.464630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.464651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.464659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.470925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.470946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.470954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.477117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.477138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.477146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.482274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.482296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.482304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.488200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.488221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.488229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.494837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.494858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.494870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.501123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.501144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.501151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.507442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.507462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.507470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.513559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.513578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.513587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.519667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.519688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.519696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.525804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.525825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.525833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.531951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.531972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.531980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.538135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.538156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.538164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.544342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.544363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.544371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.550644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.550667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.550675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.556857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.556877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.556885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.562958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.562988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.568807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.568829] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.568838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.367 [2024-04-24 10:27:51.574554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.367 [2024-04-24 10:27:51.574576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.367 [2024-04-24 10:27:51.574584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.368 [2024-04-24 10:27:51.580700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.368 [2024-04-24 10:27:51.580721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.368 [2024-04-24 10:27:51.580729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.368 [2024-04-24 10:27:51.586852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.368 [2024-04-24 10:27:51.586874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.368 [2024-04-24 10:27:51.586882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.368 [2024-04-24 10:27:51.592981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.368 [2024-04-24 10:27:51.593002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.368 [2024-04-24 10:27:51.593009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.368 [2024-04-24 10:27:51.599778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.368 [2024-04-24 10:27:51.599800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.368 [2024-04-24 10:27:51.599808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.368 [2024-04-24 10:27:51.607927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:38.368 [2024-04-24 10:27:51.607948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.368 [2024-04-24 10:27:51.607956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.368 [2024-04-24 10:27:51.615231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 
00:32:38.368 [2024-04-24 10:27:51.615253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.368 [2024-04-24 10:27:51.615261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:38.368 [2024-04-24 10:27:51.622410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:38.368 [2024-04-24 10:27:51.622432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.368 [2024-04-24 10:27:51.622440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:38.368 [2024-04-24 10:27:51.629830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:38.368 [2024-04-24 10:27:51.629851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.368 [2024-04-24 10:27:51.629859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:38.368 [2024-04-24 10:27:51.638583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:38.368 [2024-04-24 10:27:51.638605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.368 [2024-04-24 10:27:51.638613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated nvme_tcp.c:1391 data digest error records on tqpair=(0x1b4a820), each followed by a READ command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, omitted ...]
00:32:39.410 [2024-04-24 10:27:52.545948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:39.410 [2024-04-24 10:27:52.545970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.410 [2024-04-24 10:27:52.545977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:39.410 [2024-04-24 10:27:52.555206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820)
00:32:39.410 [2024-04-24 10:27:52.555227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.410 [2024-04-24 10:27:52.555235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0
dnr:0 00:32:39.410 [2024-04-24 10:27:52.563737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.563764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.563772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.571630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.571651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.571659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.578705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.578726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.578735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.590507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.590528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.590536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.600554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.600575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.600583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.609199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.609220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.609229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.616559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.410 [2024-04-24 10:27:52.616580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.410 [2024-04-24 10:27:52.616588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.410 [2024-04-24 10:27:52.624418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.624439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.624447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.631523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.631544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.631552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.638385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.638407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.638414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.645269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.645289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.645297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.651348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.651369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.651377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.657613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.657634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.657641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.664312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.664333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.664341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.670629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.670650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.670658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.677500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.677521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.677529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.411 [2024-04-24 10:27:52.685367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.411 [2024-04-24 10:27:52.685389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.411 [2024-04-24 10:27:52.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.696123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.696144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.696155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.707342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.707363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.707371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.716709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.716730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.716738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.727512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.727534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 
[2024-04-24 10:27:52.727542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.738249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.738271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.738279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.748158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.748180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.748189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.756122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.756144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.756152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.763438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.763460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.763468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.771001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.771022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.771030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.783645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.783669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.670 [2024-04-24 10:27:52.783677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.670 [2024-04-24 10:27:52.794290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b4a820) 00:32:39.670 [2024-04-24 10:27:52.794312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL 
00:32:39.671
00:32:39.671                                         Latency(us)
00:32:39.671 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:32:39.671 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:39.671 nvme0n1                     :       2.00  4436.40   554.55     0.00     0.00   3603.50   584.13 15044.79
00:32:39.671 ===================================================================================================================
00:32:39.671 Total                       :             4436.40   554.55     0.00     0.00   3603.50   584.13 15044.79
00:32:39.671 0
00:32:39.671 10:27:52 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:39.671 10:27:52 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:39.671 10:27:52 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:39.671 | .driver_specific
00:32:39.671 | .nvme_error
00:32:39.671 | .status_code
00:32:39.671 | .command_transient_transport_error'
00:32:39.671 10:27:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:39.930 10:27:53 -- host/digest.sh@71 -- # (( 286 > 0 ))
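The (( 286 > 0 )) check above is the pass gate for this randread digest run: get_transient_errcount pulls the per-bdev NVMe error counters out of bdevperf over its RPC socket and filters them with jq. As a self-contained sketch of the same query (socket path, bdev name, and jq filter are copied from the trace; the counter is only populated because bdev_nvme_set_options was given --nvme-error-stat):

    # Count completions that ended in TRANSIENT TRANSPORT ERROR (00/22)
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 ))   # in this run: 286 corrupted-digest reads were counted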
00:32:39.930 10:27:53 -- host/digest.sh@73 -- # killprocess 488786
00:32:39.930 10:27:53 -- common/autotest_common.sh@926 -- # '[' -z 488786 ']'
00:32:39.930 10:27:53 -- common/autotest_common.sh@930 -- # kill -0 488786
00:32:39.930 10:27:53 -- common/autotest_common.sh@931 -- # uname
00:32:39.930 10:27:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:39.930 10:27:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 488786
00:32:39.930 10:27:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:39.930 10:27:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:39.930 10:27:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 488786'
00:32:39.930 killing process with pid 488786
00:32:39.930 10:27:53 -- common/autotest_common.sh@945 -- # kill 488786
00:32:39.930 Received shutdown signal, test time was about 2.000000 seconds
00:32:39.930
00:32:39.930                                         Latency(us)
00:32:39.930 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:32:39.930 ===================================================================================================================
00:32:39.930 Total                       :                0.00     0.00     0.00     0.00      0.00     0.00     0.00
00:32:39.930 10:27:53 -- common/autotest_common.sh@950 -- # wait 488786
00:32:40.190 10:27:53 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:32:40.190 10:27:53 -- host/digest.sh@54 -- # local rw bs qd
00:32:40.190 10:27:53 -- host/digest.sh@56 -- # rw=randwrite
00:32:40.190 10:27:53 -- host/digest.sh@56 -- # bs=4096
00:32:40.190 10:27:53 -- host/digest.sh@56 -- # qd=128
00:32:40.190 10:27:53 -- host/digest.sh@58 -- # bperfpid=489357
00:32:40.190 10:27:53 -- host/digest.sh@60 -- # waitforlisten 489357 /var/tmp/bperf.sock
00:32:40.190 10:27:53 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:40.190 10:27:53 -- common/autotest_common.sh@819 -- # '[' -z 489357 ']'
00:32:40.190 10:27:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:40.190 10:27:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:40.190 10:27:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:40.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:40.190 10:27:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:40.190 10:27:53 -- common/autotest_common.sh@10 -- # set +x
00:32:40.190 [2024-04-24 10:27:53.305838] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
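The run_bperf_err arguments (randwrite 4096 128) map one-to-one onto the bdevperf command line traced above (rw, bs, qd). A sketch of that launch step, with names taken from the trace; the function body is paraphrased from it, not quoted from digest.sh:

    rw=randwrite bs=4096 qd=128
    # -z starts bdevperf idle until a perform_tests RPC arrives, so the
    # harness can attach the controller and arm error injection first.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # block until the socket answers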
00:32:40.190 [2024-04-24 10:27:53.305884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489357 ]
00:32:40.190 EAL: No free 2048 kB hugepages reported on node 1
00:32:40.190 [2024-04-24 10:27:53.360820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:40.190 [2024-04-24 10:27:53.426582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:41.127 10:27:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:41.127 10:27:54 -- common/autotest_common.sh@852 -- # return 0
00:32:41.127 10:27:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:41.127 10:27:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:41.127 10:27:54 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:41.127 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:41.127 10:27:54 -- common/autotest_common.sh@10 -- # set +x
00:32:41.127 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:41.127 10:27:54 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:41.127 10:27:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:41.386 nvme0n1
00:32:41.386 10:27:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:41.386 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:41.386 10:27:54 -- common/autotest_common.sh@10 -- # set +x
00:32:41.386 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:41.386 10:27:54 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:41.386 10:27:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
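Condensing the setup traced above into plain RPC calls makes the injection path easier to see. A sketch: bperf_rpc wraps rpc.py against the bdevperf socket, while rpc_cmd (no -s flag) is assumed here to reach the long-running nvmf target on its default socket, which is how the trace splits the two:

    BPERF="scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf (the NVMe/TCP initiator)
    TGT="scripts/rpc.py"                            # nvmf target, default socket (assumed)
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TGT accel_error_inject_error -o crc32c -t disable          # injection off while connecting
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # --ddgst: CRC32C data digests on
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c ops
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With data digests enabled, every corrupted crc32c result makes a PDU's digest verification fail, and each failure is completed back to bdevperf as COMMAND TRANSIENT TRANSPORT ERROR (00/22): exactly the pattern that fills the next two seconds of output.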
00:32:41.645 Running I/O for 2 seconds...
00:32:41.645 [2024-04-24 10:27:54.748238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f4f40
00:32:41.645 [2024-04-24 10:27:54.748846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:41.645 [2024-04-24 10:27:54.748875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:32:41.645 [2024-04-24 10:27:54.757077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f3e60
00:32:41.645 [2024-04-24 10:27:54.757693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:41.645 [2024-04-24 10:27:54.757716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... roughly fifty-five further WRITE completions in the same pattern elided: a Data digest error on tqpair=(0x2402490) with a varying pdu address, followed by the WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1; cids and lbas vary, timestamps 10:27:54.766056 through 10:27:55.354599 ...]
00:32:42.168 [2024-04-24 10:27:55.362187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f31b8
00:32:42.168 [2024-04-24 10:27:55.363423] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.168 [2024-04-24 10:27:55.363445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:42.168 [2024-04-24 10:27:55.371023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f57b0 00:32:42.168 [2024-04-24 10:27:55.372285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.372303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.379693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f7970 00:32:42.169 [2024-04-24 10:27:55.380319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.380338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.387090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f6890 00:32:42.169 [2024-04-24 10:27:55.387904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.387922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.396416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2d80 00:32:42.169 [2024-04-24 10:27:55.396896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.396915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.405353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6fa8 00:32:42.169 [2024-04-24 10:27:55.406182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.406200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.414379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e7c50 00:32:42.169 [2024-04-24 10:27:55.415237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.415255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.423236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0350 00:32:42.169 [2024-04-24 
10:27:55.424118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.424137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.432106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0788 00:32:42.169 [2024-04-24 10:27:55.433012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.433031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:42.169 [2024-04-24 10:27:55.441003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fb048 00:32:42.169 [2024-04-24 10:27:55.441969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.169 [2024-04-24 10:27:55.441988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.428 [2024-04-24 10:27:55.450111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fc998 00:32:42.428 [2024-04-24 10:27:55.451117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.428 [2024-04-24 10:27:55.451136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.428 [2024-04-24 10:27:55.459201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fd640 00:32:42.428 [2024-04-24 10:27:55.460214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.428 [2024-04-24 10:27:55.460233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:42.428 [2024-04-24 10:27:55.468299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f6cc8 00:32:42.428 [2024-04-24 10:27:55.469329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.428 [2024-04-24 10:27:55.469347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.428 [2024-04-24 10:27:55.477378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f9b30 00:32:42.428 [2024-04-24 10:27:55.478413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.428 [2024-04-24 10:27:55.478432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.428 [2024-04-24 10:27:55.486712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ebb98 
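The repeated tcp.c:2034:data_crc32_calc_done *ERROR* records above are the NVMe/TCP data-digest (DDGST) check firing: this run exercises the digest-error path, so the CRC32C digest on each 0x1000-byte DATA PDU fails verification and every WRITE completes with a retryable transport error instead of corrupt data being accepted silently. Below is a minimal, self-contained sketch of the digest math being validated, using the standard Castagnoli CRC32C that NVMe/TCP specifies; it is a generic bitwise illustration, not SPDK's accelerated implementation, and the buffer name is hypothetical.

/* CRC32C (Castagnoli) over a PDU data buffer -- the digest that
 * data_crc32_calc_done is checking. Reflected polynomial 0x82F63B78,
 * initial value and final XOR both 0xFFFFFFFF. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            /* Shift one bit; apply the reflected polynomial if a bit fell out. */
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
        }
    }
    return crc ^ 0xFFFFFFFFu; /* receiver compares this against the PDU's DDGST field */
}

int main(void)
{
    uint8_t pdu_data[4096] = { 0 }; /* 0x1000-byte payload, matching len:0x1000 in the log */

    printf("ddgst=0x%08x\n", crc32c(pdu_data, sizeof(pdu_data)));
    return 0;
}

A mismatch between this value and the digest carried in the PDU is exactly what each "Data digest error on tqpair=..." record reports.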
00:32:42.428 [2024-04-24 10:27:55.487682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.428 [2024-04-24 10:27:55.487701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.428 [2024-04-24 10:27:55.495582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2d80 00:32:42.429 [2024-04-24 10:27:55.496581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.496601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.504462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fc560 00:32:42.429 [2024-04-24 10:27:55.505473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.505493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.513466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f35f0 00:32:42.429 [2024-04-24 10:27:55.514509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.514527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.522358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e8088 00:32:42.429 [2024-04-24 10:27:55.523419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.523438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.531502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190eea00 00:32:42.429 [2024-04-24 10:27:55.532230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.532249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.541353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fa3a0 00:32:42.429 [2024-04-24 10:27:55.542500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.542518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.549134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) 
with pdu=0x2000190ec408 00:32:42.429 [2024-04-24 10:27:55.549814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.549833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.557932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f4b08 00:32:42.429 [2024-04-24 10:27:55.558484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.558503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.566795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f35f0 00:32:42.429 [2024-04-24 10:27:55.567339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.567358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.575643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6738 00:32:42.429 [2024-04-24 10:27:55.576186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.576204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.584487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0bc0 00:32:42.429 [2024-04-24 10:27:55.585031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.585052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.593329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0ff8 00:32:42.429 [2024-04-24 10:27:55.593935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.593956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.602223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ef270 00:32:42.429 [2024-04-24 10:27:55.602870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.602889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.611078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2402490) with pdu=0x2000190f0788 00:32:42.429 [2024-04-24 10:27:55.611635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.611653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.619826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0bc0 00:32:42.429 [2024-04-24 10:27:55.621092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.621110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.628645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190eb328 00:32:42.429 [2024-04-24 10:27:55.629612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.629631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.637503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f96f8 00:32:42.429 [2024-04-24 10:27:55.638481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.638500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.646343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:42.429 [2024-04-24 10:27:55.647330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.647349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.655195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f20d8 00:32:42.429 [2024-04-24 10:27:55.656194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.656213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.664024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f20d8 00:32:42.429 [2024-04-24 10:27:55.665029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.665048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.672803] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:42.429 [2024-04-24 10:27:55.673684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.673703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.681711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f96f8 00:32:42.429 [2024-04-24 10:27:55.682898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.429 [2024-04-24 10:27:55.682916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:42.429 [2024-04-24 10:27:55.690516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ecc78 00:32:42.430 [2024-04-24 10:27:55.691633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.430 [2024-04-24 10:27:55.691652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:42.430 [2024-04-24 10:27:55.699467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5658 00:32:42.430 [2024-04-24 10:27:55.700593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.430 [2024-04-24 10:27:55.700611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.708501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f92c0 00:32:42.689 [2024-04-24 10:27:55.709419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.709438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.717473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f5378 00:32:42.689 [2024-04-24 10:27:55.718378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.718397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.726291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e8d30 00:32:42.689 [2024-04-24 10:27:55.727514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.727533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.735112] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2948 00:32:42.689 [2024-04-24 10:27:55.736386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.736404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.743896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f7970 00:32:42.689 [2024-04-24 10:27:55.744965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.744983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.752709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5658 00:32:42.689 [2024-04-24 10:27:55.754054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.754077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.761521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fd640 00:32:42.689 [2024-04-24 10:27:55.762923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.689 [2024-04-24 10:27:55.762942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:42.689 [2024-04-24 10:27:55.770520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f7970 00:32:42.689 [2024-04-24 10:27:55.771970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.771988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.780186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fef90 00:32:42.690 [2024-04-24 10:27:55.781569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.787840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e73e0 00:32:42.690 [2024-04-24 10:27:55.788447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.788465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:42.690 
[2024-04-24 10:27:55.796653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fe2e8 00:32:42.690 [2024-04-24 10:27:55.797267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.797285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.805535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f1430 00:32:42.690 [2024-04-24 10:27:55.806169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.806188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.814388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8618 00:32:42.690 [2024-04-24 10:27:55.815015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.815033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.823244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f3a28 00:32:42.690 [2024-04-24 10:27:55.823884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.823906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.832122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6fa8 00:32:42.690 [2024-04-24 10:27:55.832783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.832800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.840972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fc560 00:32:42.690 [2024-04-24 10:27:55.841625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.841645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.849799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8e88 00:32:42.690 [2024-04-24 10:27:55.850468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.850486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 
p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.858629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.859301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.859319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.867445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.868133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.868151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.876279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.876974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.876992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.885119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.885818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.885836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.893956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.894662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.894680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.902801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.903518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.903540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.911662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.912393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.912414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.920508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.921249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.921269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.929354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.930103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.930121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.938186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.938943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.938961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.947034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.947805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.947823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.955873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.956652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.956670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.690 [2024-04-24 10:27:55.964795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.690 [2024-04-24 10:27:55.965601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.690 [2024-04-24 10:27:55.965620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:55.973873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:55.974664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:55.974683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:55.982766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:55.983576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:55.983595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:55.991607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:55.992432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:55.992451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.000458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:56.001328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.001347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.009290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:56.010185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.010211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.019154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:56.020306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.020325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.028412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f8a50 00:32:42.950 [2024-04-24 10:27:56.029387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.029406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.036343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f5378 00:32:42.950 [2024-04-24 10:27:56.037212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.037231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.045350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f5378 00:32:42.950 [2024-04-24 10:27:56.046260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.046279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.054356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f5378 00:32:42.950 [2024-04-24 10:27:56.055306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.055325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.064327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f5378 00:32:42.950 [2024-04-24 10:27:56.065193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.065212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.073269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f9b30 00:32:42.950 [2024-04-24 10:27:56.074381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.074399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:42.950 [2024-04-24 10:27:56.082219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e8088 00:32:42.950 [2024-04-24 10:27:56.083172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.950 [2024-04-24 10:27:56.083190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.091081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f1430 00:32:42.951 [2024-04-24 10:27:56.091961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.091979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.099917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f9f68 00:32:42.951 [2024-04-24 10:27:56.100846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.100865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.108775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fb480 00:32:42.951 [2024-04-24 10:27:56.109499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.109517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.117655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f4298 00:32:42.951 [2024-04-24 10:27:56.118258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.118277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.125294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f20d8 00:32:42.951 [2024-04-24 10:27:56.125485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.125502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.134476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f96f8 00:32:42.951 [2024-04-24 10:27:56.135165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.135189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.143382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f4f40 00:32:42.951 [2024-04-24 10:27:56.144059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.144084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.152229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f20d8 00:32:42.951 [2024-04-24 10:27:56.152920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.152938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.161054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ff3c8 00:32:42.951 [2024-04-24 10:27:56.161758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 
10:27:56.161776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.169893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ef270 00:32:42.951 [2024-04-24 10:27:56.170601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.170619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.178736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5220 00:32:42.951 [2024-04-24 10:27:56.179449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.179467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.187570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f9b30 00:32:42.951 [2024-04-24 10:27:56.188296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.188314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.196403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2510 00:32:42.951 [2024-04-24 10:27:56.197139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.197157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.205229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f7970 00:32:42.951 [2024-04-24 10:27:56.205967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.205986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.214044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:42.951 [2024-04-24 10:27:56.214804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.951 [2024-04-24 10:27:56.214823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.951 [2024-04-24 10:27:56.222929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:42.951 [2024-04-24 10:27:56.223720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
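Every completion above carries the same status word, printed as (00/22) with dnr:0: status code type 0x0 (generic command status), status code 0x22 (Transient Transport Error), and the Do Not Retry bit clear, which is what makes these digest failures retryable rather than fatal. The sketch below decodes that status field; the bit layout follows the NVMe base specification's completion status word, and the helper and macro names are illustrative, not SPDK API.

/* Decode the completion status printed as "(00/22) ... dnr:0".
 * The 16-bit word is CQE dword 3 bits 31:16 (bit 0 = phase tag). */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x0  /* the "00" in (00/22) */
#define NVME_SC_TRANSIENT_TRANSPORT 0x22 /* the "22" in (00/22) */

static bool completion_is_retryable(uint16_t status)
{
    uint8_t sc  = (status >> 1) & 0xFF; /* Status Code, bits 8:1  */
    uint8_t sct = (status >> 9) & 0x7;  /* Status Code Type, bits 11:9 */
    bool    dnr = (status >> 15) & 0x1; /* Do Not Retry, bit 15 */

    return !dnr && sct == NVME_SCT_GENERIC && sc == NVME_SC_TRANSIENT_TRANSPORT;
}

int main(void)
{
    /* SCT=0x0, SC=0x22, DNR=0 -- matches every completion in this log. */
    uint16_t status = (NVME_SCT_GENERIC << 9) | (NVME_SC_TRANSIENT_TRANSPORT << 1);

    printf("retryable=%d\n", completion_is_retryable(status));
    return 0;
}

With dnr:0 the initiator is expected to retry the command, which is why the test keeps issuing WRITEs after each injected digest error.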
00:32:42.951 [2024-04-24 10:27:56.223739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.232109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.232899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.232917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.241025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.241811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.241829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.249878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.250673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.250691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.258704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.259505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.259523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.267539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.268351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.268369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.276390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.277210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.277228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.285439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.286264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11568 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.286284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.294404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.295241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.295260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.303364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.304230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.304249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.312341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.313217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.313236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.321305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.322202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.322230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.330390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.331289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.331308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.339276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.340158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.340177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.348142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.349032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4288 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.349051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.356984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.357889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.357907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.365901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ed920 00:32:43.211 [2024-04-24 10:27:56.366807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.366829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.374771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2948 00:32:43.211 [2024-04-24 10:27:56.375691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.375711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.383648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ef6a8 00:32:43.211 [2024-04-24 10:27:56.384594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.384612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.392514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ef6a8 00:32:43.211 [2024-04-24 10:27:56.393454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.393473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.401557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2948 00:32:43.211 [2024-04-24 10:27:56.402444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.402464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:43.211 [2024-04-24 10:27:56.410677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5a90 00:32:43.211 [2024-04-24 10:27:56.411597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.211 [2024-04-24 10:27:56.411616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.419856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5a90 00:32:43.212 [2024-04-24 10:27:56.420798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.420817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.428884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fa3a0 00:32:43.212 [2024-04-24 10:27:56.429907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.429926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.437841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0788 00:32:43.212 [2024-04-24 10:27:56.438831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.438850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.446746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6738 00:32:43.212 [2024-04-24 10:27:56.447579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.447597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.455574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f81e0 00:32:43.212 [2024-04-24 10:27:56.456657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.456676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.465609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fcdd0 00:32:43.212 [2024-04-24 10:27:56.466315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.466334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.474400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0788 00:32:43.212 [2024-04-24 10:27:56.475172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.475191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.212 [2024-04-24 10:27:56.483206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5a90 00:32:43.212 [2024-04-24 10:27:56.484076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.212 [2024-04-24 10:27:56.484095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.492312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f7538 00:32:43.472 [2024-04-24 10:27:56.493194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.493213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.501241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e9168 00:32:43.472 [2024-04-24 10:27:56.502129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.502147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.510040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e8088 00:32:43.472 [2024-04-24 10:27:56.510870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.510888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.518782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fa7d8 00:32:43.472 [2024-04-24 10:27:56.519439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.519458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.527598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ea680 00:32:43.472 [2024-04-24 10:27:56.528393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.528412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.536618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f5378 00:32:43.472 [2024-04-24 10:27:56.537312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.537330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.545450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e5ec8 00:32:43.472 [2024-04-24 10:27:56.546162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.546180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.554268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fcdd0 00:32:43.472 [2024-04-24 10:27:56.554930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.554948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.563110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fd640 00:32:43.472 [2024-04-24 10:27:56.564307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.564326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.571916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6738 00:32:43.472 [2024-04-24 10:27:56.572953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.572972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.580760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f1ca0 00:32:43.472 [2024-04-24 10:27:56.581808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.581827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.589592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190ebb98 00:32:43.472 [2024-04-24 10:27:56.590649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.590667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.598430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e9e10 00:32:43.472 [2024-04-24 
10:27:56.599490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.599512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.607242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6300 00:32:43.472 [2024-04-24 10:27:56.608315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.608334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.616081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e88f8 00:32:43.472 [2024-04-24 10:27:56.617173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.617192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.624931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f2d80 00:32:43.472 [2024-04-24 10:27:56.626027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.626045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.633846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e95a0 00:32:43.472 [2024-04-24 10:27:56.634953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.634972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.642742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e7c50 00:32:43.472 [2024-04-24 10:27:56.643847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.643866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.650505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190eb328 00:32:43.472 [2024-04-24 10:27:56.651120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.651139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.659352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fd640 
00:32:43.472 [2024-04-24 10:27:56.659964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.659982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.668214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fd640 00:32:43.472 [2024-04-24 10:27:56.668844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.668862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.677081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190eb328 00:32:43.472 [2024-04-24 10:27:56.677721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.677740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.685926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190fe2e8 00:32:43.472 [2024-04-24 10:27:56.686665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.686683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.694783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f0788 00:32:43.472 [2024-04-24 10:27:56.695528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.695546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.703717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f92c0 00:32:43.472 [2024-04-24 10:27:56.704473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.472 [2024-04-24 10:27:56.704491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:43.472 [2024-04-24 10:27:56.712587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190e6300 00:32:43.472 [2024-04-24 10:27:56.713374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.473 [2024-04-24 10:27:56.713393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:43.473 [2024-04-24 10:27:56.721409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2402490) with pdu=0x2000190f7970 00:32:43.473 [2024-04-24 10:27:56.722235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.473 [2024-04-24 10:27:56.722255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:43.473 [2024-04-24 10:27:56.730289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190f35f0 00:32:43.473 [2024-04-24 10:27:56.731044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.473 [2024-04-24 10:27:56.731064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:43.473 [2024-04-24 10:27:56.739207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2402490) with pdu=0x2000190eb328 00:32:43.473 [2024-04-24 10:27:56.739983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:43.473 [2024-04-24 10:27:56.740003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:43.473 00:32:43.473 Latency(us) 00:32:43.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.473 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.473 nvme0n1 : 2.00 28698.67 112.10 0.00 0.00 4455.58 2194.03 10599.74 00:32:43.473 =================================================================================================================== 00:32:43.473 Total : 28698.67 112.10 0.00 0.00 4455.58 2194.03 10599.74 00:32:43.473 0 00:32:43.732 10:27:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:43.732 10:27:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:43.732 10:27:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:43.732 | .driver_specific 00:32:43.732 | .nvme_error 00:32:43.732 | .status_code 00:32:43.732 | .command_transient_transport_error' 00:32:43.732 10:27:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:43.732 10:27:56 -- host/digest.sh@71 -- # (( 225 > 0 )) 00:32:43.732 10:27:56 -- host/digest.sh@73 -- # killprocess 489357 00:32:43.732 10:27:56 -- common/autotest_common.sh@926 -- # '[' -z 489357 ']' 00:32:43.732 10:27:56 -- common/autotest_common.sh@930 -- # kill -0 489357 00:32:43.732 10:27:56 -- common/autotest_common.sh@931 -- # uname 00:32:43.732 10:27:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:43.732 10:27:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 489357 00:32:43.732 10:27:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:43.732 10:27:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:43.732 10:27:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 489357' 00:32:43.732 killing process with pid 489357 00:32:43.732 10:27:56 -- common/autotest_common.sh@945 -- # kill 489357 00:32:43.732 Received shutdown signal, test time was about 2.000000 seconds 00:32:43.732 00:32:43.732 Latency(us) 00:32:43.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:32:43.732 =================================================================================================================== 00:32:43.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:43.732 10:27:56 -- common/autotest_common.sh@950 -- # wait 489357 00:32:43.992 10:27:57 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:32:43.992 10:27:57 -- host/digest.sh@54 -- # local rw bs qd 00:32:43.992 10:27:57 -- host/digest.sh@56 -- # rw=randwrite 00:32:43.992 10:27:57 -- host/digest.sh@56 -- # bs=131072 00:32:43.992 10:27:57 -- host/digest.sh@56 -- # qd=16 00:32:43.992 10:27:57 -- host/digest.sh@58 -- # bperfpid=490068 00:32:43.992 10:27:57 -- host/digest.sh@60 -- # waitforlisten 490068 /var/tmp/bperf.sock 00:32:43.992 10:27:57 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:43.992 10:27:57 -- common/autotest_common.sh@819 -- # '[' -z 490068 ']' 00:32:43.992 10:27:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:43.992 10:27:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:43.992 10:27:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:43.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:43.992 10:27:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:43.992 10:27:57 -- common/autotest_common.sh@10 -- # set +x 00:32:43.992 [2024-04-24 10:27:57.243216] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:43.992 [2024-04-24 10:27:57.243262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490068 ] 00:32:43.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:43.992 Zero copy mechanism will not be used. 
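The pass/fail check traced above reads the per-bdev NVMe error counters over bdevperf's RPC socket. A minimal sketch of that check, using the socket path, bdev name, and jq filter shown in the trace ($SPDK_DIR standing in for the workspace's spdk checkout is an assumption; this is a sketch of the sequence, not the harness itself):

  # Query per-bdev I/O statistics from the bdevperf instance listening
  # on /var/tmp/bperf.sock, then extract the transient transport error
  # counter that the injected CRC-32C data-digest corruption produces.
  # ($SPDK_DIR is assumed to point at the SPDK checkout.)
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The test passes only if at least one transient error was recorded;
  # the randwrite/qd128 run above observed 225.
  (( errcount > 0 )) || exit 1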
00:32:43.992 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.251 [2024-04-24 10:27:57.296043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.251 [2024-04-24 10:27:57.363269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.819 10:27:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:44.819 10:27:58 -- common/autotest_common.sh@852 -- # return 0 00:32:44.819 10:27:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:44.819 10:27:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:45.077 10:27:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:45.077 10:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.077 10:27:58 -- common/autotest_common.sh@10 -- # set +x 00:32:45.077 10:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.077 10:27:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.077 10:27:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.646 nvme0n1 00:32:45.646 10:27:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:45.646 10:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:45.646 10:27:58 -- common/autotest_common.sh@10 -- # set +x 00:32:45.646 10:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:45.646 10:27:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:45.646 10:27:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.646 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:45.646 Zero copy mechanism will not be used. 00:32:45.646 Running I/O for 2 seconds... 
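Before the 131072-byte, queue-depth-16 run produces the completions below, the trace launches a fresh bdevperf and arms the CRC-32C corruption. A minimal sketch of that setup, assembled from the commands in the trace (the injection RPC is assumed to go to the target application's default RPC socket, matching the trace's bare rpc_cmd; $SPDK_DIR is again an assumed shorthand):

  # Start bdevperf on core mask 0x2, listening on its own RPC socket,
  # configured for 128 KiB random writes at queue depth 16 for 2 seconds.
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep per-NVMe error statistics and retry failed commands indefinitely,
  # so digest errors are counted instead of failing the bdev.
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest (--ddgst) enabled.
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # On the target side (default RPC socket assumed), corrupt every 32nd
  # CRC-32C calculation so a fraction of WRITEs fail the data digest
  # check and complete with TRANSIENT TRANSPORT ERROR.
  "$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the timed I/O window reported below.
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests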
00:32:45.646 [2024-04-24 10:27:58.768632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.768810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.768840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.777570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.777659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.777682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.785029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.785106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.785127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.790283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.790438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.790457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.794977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.795143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.800786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.800973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.800995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.807210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.807389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.807407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.812644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.812859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.812879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.817811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.818096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.818116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.823217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.823383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.823401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.829136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.829207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.829225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.834221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.834307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.834326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.839107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.839209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.839227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.843895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.843995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.844013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.848885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.849034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.849051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.854106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.854311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.854330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.858933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.859215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.859235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.646 [2024-04-24 10:27:58.863770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.646 [2024-04-24 10:27:58.863911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.646 [2024-04-24 10:27:58.863929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.868425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.868500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.868518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.873056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.873159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.873177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.877751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.877868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.877885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.882451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.882536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.882554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.887009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.887124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.887142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.892274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.892480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.892500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.897887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.898165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.898184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.905176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.905365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.905381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.912194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.912343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 10:27:58.912360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.647 [2024-04-24 10:27:58.922544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.647 [2024-04-24 10:27:58.922844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.647 [2024-04-24 
10:27:58.922865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.931950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.932129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.932146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.939779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.939968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.939986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.947859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.948030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.948047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.955524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.955702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.955723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.963255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.963638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.963657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.969766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.969906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.969923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.976141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.976265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:45.907 [2024-04-24 10:27:58.976282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.982292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.982415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.982433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.987456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.907 [2024-04-24 10:27:58.987600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.907 [2024-04-24 10:27:58.987617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.907 [2024-04-24 10:27:58.992121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:58.992231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:58.992249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:58.997170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:58.997281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:58.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.002578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.002674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.002691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.007986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.008233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.008251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.013128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.013324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.013343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.017790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.017928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.017945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.022433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.022590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.022609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.027039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.027109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.027127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.031592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.031735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.031752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.037301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.037531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.037550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.042608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.042780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.042798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.048752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.048932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.048950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.054538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.054712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.054730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.059769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.059999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.060018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.066556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.066706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.066724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.071993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.072109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.072127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.076428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.076549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.076567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.081495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.081719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.081737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.086312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.086504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.086522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.091483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.091715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.091734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.096117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.096297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.096319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.100657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.100791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.100809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.105997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.106148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.106166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.111015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.111129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.111147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.115591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.115741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.115760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.120307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.120569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.120588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.125204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.125384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.908 [2024-04-24 10:27:59.125404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.908 [2024-04-24 10:27:59.130041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.908 [2024-04-24 10:27:59.130268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.130287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.136820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.136911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.136928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.143935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.144035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.144053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.150345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.150474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.150492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.157625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.157755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.157772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.163787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 
10:27:59.163865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.163883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.169256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.169414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.169431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.174790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.174885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.174903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.909 [2024-04-24 10:27:59.180454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:45.909 [2024-04-24 10:27:59.180699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.909 [2024-04-24 10:27:59.180718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.187262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.169 [2024-04-24 10:27:59.187425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.187442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.192978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.169 [2024-04-24 10:27:59.193049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.193066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.199616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.169 [2024-04-24 10:27:59.199937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.199957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.212243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 
00:32:46.169 [2024-04-24 10:27:59.212369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.212389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.220142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.169 [2024-04-24 10:27:59.220394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.220414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.227134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.169 [2024-04-24 10:27:59.227348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.227367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.169 [2024-04-24 10:27:59.232783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.169 [2024-04-24 10:27:59.232947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.169 [2024-04-24 10:27:59.232965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.237962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.238174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.238192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.243525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.243699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.243716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.248351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.248465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.248483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.252842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.252998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.253020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.257634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.257723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.257741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.262421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.262527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.262545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.268061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.268227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.268245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.275627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.275866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.275883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.282765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.283000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.283020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.289833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.289945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.289963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.297553] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.297710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.297728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.305347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.305559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.305578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.313457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.313651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.313668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.321364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.321535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.321551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.329268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.329577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.329595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.337261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.337444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.337461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.344820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.345108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.345129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.352918] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.353093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.353111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.360611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.360819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.360837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.367752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.367919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.367937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.375514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.375712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.375730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.382325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.382468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.382486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.388098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.388304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.388323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.392896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.393078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.393096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.170 
[2024-04-24 10:27:59.397241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.397537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.397556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.401552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.401831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.401850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.170 [2024-04-24 10:27:59.406260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.170 [2024-04-24 10:27:59.406457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.170 [2024-04-24 10:27:59.406475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.411464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.411624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.411642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.415668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.415795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.415813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.420151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.420275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.420296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.424997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.425101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.425118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.429615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.429771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.429789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.434651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.434973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.434992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.439716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.439944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.439963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.171 [2024-04-24 10:27:59.443954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.171 [2024-04-24 10:27:59.444129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.171 [2024-04-24 10:27:59.444147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.448909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.449093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.449111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.453402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.453503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.453521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.458256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.458385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.463380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.463622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.463641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.468372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.468538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.468555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.472667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.472830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.472848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.477741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.478053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.478078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.483832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.484091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.484111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.490811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.490949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.490967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.497023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.497134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.497151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.503705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.503844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.503862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.508550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.508712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.508729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.513545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.513668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.513685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.518302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.518574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.518593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.523336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.523566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.523585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.528023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.528173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.528192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.532615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.532739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 
10:27:59.532757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.432 [2024-04-24 10:27:59.537855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.432 [2024-04-24 10:27:59.538006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.432 [2024-04-24 10:27:59.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.544178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.544336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.544354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.551812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.552036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.552055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.557509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.557657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.557679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.562720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.563010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.563028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.567298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.567617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.567637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.572342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.572546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:46.433 [2024-04-24 10:27:59.572566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.576624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.576735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.576753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.581429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.581570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.581588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.586094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.586277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.586294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.591050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.591218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.591236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.596564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.596706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.596725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.602144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.602422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.602441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.608089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.608282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.608300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.613930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.614119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.614137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.619533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.619615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.625213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.625297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.625315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.630896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.630980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.630998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.636844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.637001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.637019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.642668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.642793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.642810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.648213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.648465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.648488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.654144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.654373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.654393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.660248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.660655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.660674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.666146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.666214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.666232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.672409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.672510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.672527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.678186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.678360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.678377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.683156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 [2024-04-24 10:27:59.683331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.433 [2024-04-24 10:27:59.683349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.433 [2024-04-24 10:27:59.687881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:46.433 
[2024-04-24 10:27:59.688059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.433 [2024-04-24 10:27:59.688085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.433 [2024-04-24 10:27:59.692673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.434 [2024-04-24 10:27:59.692918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.434 [2024-04-24 10:27:59.692937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.434 [2024-04-24 10:27:59.697251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.434 [2024-04-24 10:27:59.697559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.434 [2024-04-24 10:27:59.697577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.434 [2024-04-24 10:27:59.701947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.434 [2024-04-24 10:27:59.702135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.434 [2024-04-24 10:27:59.702155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.434 [2024-04-24 10:27:59.707238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.434 [2024-04-24 10:27:59.707367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.434 [2024-04-24 10:27:59.707385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.712630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.712719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.712737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.716925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.717036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.717054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.721505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.721669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.721689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.725851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.725984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.726001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.731298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.731599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.731618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.738336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.738678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.738698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.744898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.745337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.745355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.752715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.752959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.752977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.760101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.760287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.760305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.767413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.767577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.767595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.774624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.774919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.774936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.782244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.782423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.782442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.789530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.789700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.789718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.797098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.797678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.797697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.810914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.811205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.694 [2024-04-24 10:27:59.811229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.694 [2024-04-24 10:27:59.820321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.694 [2024-04-24 10:27:59.820434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.820452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.827186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.827321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.827339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.833739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.833887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.833905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.839114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.839290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.839307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.844020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.844183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.844201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.850757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.851126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.851146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.856543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.856898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.856917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.862343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.862448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.862466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.867643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.867804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.867821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.873159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.873267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.873284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.878955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.879089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.879109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.884550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.884754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.890264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.890365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.890383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.896045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.896310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.896329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.901743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.901846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.901864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.907934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.908055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.908079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.913484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.913678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.913695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.918616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.918688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.918706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.924581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.924662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.924680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.930054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.930356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.930375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.935284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.935469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.935486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.941116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.941362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.941380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.946494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.946591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.946610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.951735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.951867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.951884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.956759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.956974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.956993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.962844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.963077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.963100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.695 [2024-04-24 10:27:59.969969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.695 [2024-04-24 10:27:59.970138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.695 [2024-04-24 10:27:59.970156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.955 [2024-04-24 10:27:59.976545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.955 [2024-04-24 10:27:59.976816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.955 [2024-04-24 10:27:59.976835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:27:59.984582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:27:59.984786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:27:59.984805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:27:59.991670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:27:59.991931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:27:59.991950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:27:59.998721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:27:59.998889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:27:59.998907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.005984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.006128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.006147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.012056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.012241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.012259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.017067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.017201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.017220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.021939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.022066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.022091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.026968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.027212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.027231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.031863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.032045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.032063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.037574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.037809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.042750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.042919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.042938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.047626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.047718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.047737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.052263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.052439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.052458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.057285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.057436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.057454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.062063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.062179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.062199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.066909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.067119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.067139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.071581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.071798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.071819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.077152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.077370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.077389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.083509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.083803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.083822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.088423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.088678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.088697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.094005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.094253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.094271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.099306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.099488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.099508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.104015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.104096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.104115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.108784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.108962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.108984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.113751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.113870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.113887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.118848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.118944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.118961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.125196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.956 [2024-04-24 10:28:00.125505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.956 [2024-04-24 10:28:00.125523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.956 [2024-04-24 10:28:00.132199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.132409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.132428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.139940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.140180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.140199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.147623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.147751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.147768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.154448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.154613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.154631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.162157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.162388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.162406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.169837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.170114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.170133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.176906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.177094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.177112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.184437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.184617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.184635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.191737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.191934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.191961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.198737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.199078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.199097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.206546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.206693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.206710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.213922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.214098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.214117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.221651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.221832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.221850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:46.957 [2024-04-24 10:28:00.229763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:46.957 [2024-04-24 10:28:00.229982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.957 [2024-04-24 10:28:00.230002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.237364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.237561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.237579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.245473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.245776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.245795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.253217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.253405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.253423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.260663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.260926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.260945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.268428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.268600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.268617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.276181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.276265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.276283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.281631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.281835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.281853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.286313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.286487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.286505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.290491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.290619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.218 [2024-04-24 10:28:00.290640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.218 [2024-04-24 10:28:00.294667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.218 [2024-04-24 10:28:00.294878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.294896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.299394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.299668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.299687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.304222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.304444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.304463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.309477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.309598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.309616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.314501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.314663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.314680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.320495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.320672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.320689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.325163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.325314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.325331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.329591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.329664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.329682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.334489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.334720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.334739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.339273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.339515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.339534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.344587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.344838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.344857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.350394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.350588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.350607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.357390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.357519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.357537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.363703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.363902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.363921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.371498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.371618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.371635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.377226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.377333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.377351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.382839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.382977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.382995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.388809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.388904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.388922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.394317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.394478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.394496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.399921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.400041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.400058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.405154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.405271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.405288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.410922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.411085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.411103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.416506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.416639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.416658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.421978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.422092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.422110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.427968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.428136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.428153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.433484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.433613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.433635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.439004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.439162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.219 [2024-04-24 10:28:00.439179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.219 [2024-04-24 10:28:00.445220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.219 [2024-04-24 10:28:00.445333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.445350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.220 [2024-04-24 10:28:00.449864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.220 [2024-04-24 10:28:00.449973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.449991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.220 [2024-04-24 10:28:00.454498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.220 [2024-04-24 10:28:00.454660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.454677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:47.220 [2024-04-24 10:28:00.459034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.220 [2024-04-24 10:28:00.459194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.459212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:47.220 [2024-04-24 10:28:00.463770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.220 [2024-04-24 10:28:00.463841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.463859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.220 [2024-04-24 10:28:00.468404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.220 [2024-04-24 10:28:00.468557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.468575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:47.220 [2024-04-24 10:28:00.473254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90
00:32:47.220 [2024-04-24 10:28:00.473352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.220 [2024-04-24 10:28:00.473369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.220 [2024-04-24 10:28:00.478667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.220 [2024-04-24 10:28:00.478883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.220 [2024-04-24 10:28:00.478902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.220 [2024-04-24 10:28:00.485271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.220 [2024-04-24 10:28:00.485439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.220 [2024-04-24 10:28:00.485458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.220 [2024-04-24 10:28:00.492108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.220 [2024-04-24 10:28:00.492226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.220 [2024-04-24 10:28:00.492244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.480 [2024-04-24 10:28:00.499527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.480 [2024-04-24 10:28:00.499745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.480 [2024-04-24 10:28:00.499764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.480 [2024-04-24 10:28:00.507309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.480 [2024-04-24 10:28:00.507500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.480 [2024-04-24 10:28:00.507517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.480 [2024-04-24 10:28:00.515206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.480 [2024-04-24 10:28:00.515429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.480 [2024-04-24 10:28:00.515448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.480 [2024-04-24 10:28:00.522596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.480 [2024-04-24 10:28:00.522778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.480 [2024-04-24 10:28:00.522796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.480 [2024-04-24 10:28:00.529381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.480 [2024-04-24 10:28:00.529489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.480 [2024-04-24 10:28:00.529507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.480 [2024-04-24 10:28:00.536617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.480 [2024-04-24 10:28:00.536836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.536855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.544582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.544739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.544756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.552522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.552656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.552674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.560505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.560715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.560734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.568164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.568328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.568345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.574999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.575164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.575182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.581088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.581206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.581224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.586573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.586711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.586729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.592357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.592518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.592536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.597362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.597459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.597481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.602091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.602222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.602239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.606751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.606963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.606982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.611312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.611409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 
10:28:00.611427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.616490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.616672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.616690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.621040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.621167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.621184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.625561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.625645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.625663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.630092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.630278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.630295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.634831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.634921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.634939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.639615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.639733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.639750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.644804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.645000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:47.481 [2024-04-24 10:28:00.645017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.649441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.649541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.649560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.654036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.654200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.654218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.658516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.658699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.658717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.663275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.663384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.663402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.667777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.667963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.667981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.672300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.672418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.672438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.677123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.677245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.481 [2024-04-24 10:28:00.677263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.481 [2024-04-24 10:28:00.682115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.481 [2024-04-24 10:28:00.682338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.682358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.686555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.686694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.686712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.691246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.691484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.691503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.695799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.695928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.695946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.701805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.701927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.701944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.706628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.706827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.706845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.711287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.711368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.711386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.716160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.716259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.716276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.720776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.720953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.720974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.725415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.725506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.725523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.730172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.730385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.730403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.734729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.734847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.734864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.739620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.739753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.739770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.744371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.744546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.744563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.749271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.749405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.749422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.482 [2024-04-24 10:28:00.753757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24027d0) with pdu=0x2000190fef90 00:32:47.482 [2024-04-24 10:28:00.753913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.482 [2024-04-24 10:28:00.753931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.741 00:32:47.741 Latency(us) 00:32:47.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.741 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:47.741 nvme0n1 : 2.00 5307.28 663.41 0.00 0.00 3010.69 1780.87 13620.09 00:32:47.741 =================================================================================================================== 00:32:47.741 Total : 5307.28 663.41 0.00 0.00 3010.69 1780.87 13620.09 00:32:47.741 0 00:32:47.741 10:28:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:47.741 10:28:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:47.741 10:28:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:47.741 10:28:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:47.741 | .driver_specific 00:32:47.741 | .nvme_error 00:32:47.741 | .status_code 00:32:47.741 | .command_transient_transport_error' 00:32:47.741 10:28:00 -- host/digest.sh@71 -- # (( 342 > 0 )) 00:32:47.741 10:28:00 -- host/digest.sh@73 -- # killprocess 490068 00:32:47.741 10:28:00 -- common/autotest_common.sh@926 -- # '[' -z 490068 ']' 00:32:47.742 10:28:00 -- common/autotest_common.sh@930 -- # kill -0 490068 00:32:47.742 10:28:00 -- common/autotest_common.sh@931 -- # uname 00:32:47.742 10:28:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:47.742 10:28:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 490068 00:32:47.742 10:28:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:47.742 10:28:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:47.742 10:28:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 490068' 00:32:47.742 killing process with pid 490068 00:32:47.742 10:28:00 -- common/autotest_common.sh@945 -- # kill 490068 00:32:47.742 Received shutdown signal, test time was about 2.000000 seconds 00:32:47.742 00:32:47.742 Latency(us) 00:32:47.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.742 
=================================================================================================================== 00:32:47.742 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:47.742 10:28:00 -- common/autotest_common.sh@950 -- # wait 490068 00:32:48.001 10:28:01 -- host/digest.sh@115 -- # killprocess 487914 00:32:48.001 10:28:01 -- common/autotest_common.sh@926 -- # '[' -z 487914 ']' 00:32:48.001 10:28:01 -- common/autotest_common.sh@930 -- # kill -0 487914 00:32:48.001 10:28:01 -- common/autotest_common.sh@931 -- # uname 00:32:48.001 10:28:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:48.001 10:28:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 487914 00:32:48.001 10:28:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:48.001 10:28:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:48.001 10:28:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 487914' 00:32:48.001 killing process with pid 487914 00:32:48.001 10:28:01 -- common/autotest_common.sh@945 -- # kill 487914 00:32:48.001 10:28:01 -- common/autotest_common.sh@950 -- # wait 487914 00:32:48.260 00:32:48.260 real 0m16.830s 00:32:48.260 user 0m32.220s 00:32:48.260 sys 0m4.346s 00:32:48.260 10:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.260 10:28:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.260 ************************************ 00:32:48.260 END TEST nvmf_digest_error 00:32:48.260 ************************************ 00:32:48.260 10:28:01 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:48.260 10:28:01 -- host/digest.sh@139 -- # nvmftestfini 00:32:48.260 10:28:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:48.260 10:28:01 -- nvmf/common.sh@116 -- # sync 00:32:48.260 10:28:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:48.260 10:28:01 -- nvmf/common.sh@119 -- # set +e 00:32:48.260 10:28:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:48.260 10:28:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:48.260 rmmod nvme_tcp 00:32:48.260 rmmod nvme_fabrics 00:32:48.260 rmmod nvme_keyring 00:32:48.260 10:28:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:48.519 10:28:01 -- nvmf/common.sh@123 -- # set -e 00:32:48.519 10:28:01 -- nvmf/common.sh@124 -- # return 0 00:32:48.519 10:28:01 -- nvmf/common.sh@477 -- # '[' -n 487914 ']' 00:32:48.519 10:28:01 -- nvmf/common.sh@478 -- # killprocess 487914 00:32:48.519 10:28:01 -- common/autotest_common.sh@926 -- # '[' -z 487914 ']' 00:32:48.519 10:28:01 -- common/autotest_common.sh@930 -- # kill -0 487914 00:32:48.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (487914) - No such process 00:32:48.519 10:28:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 487914 is not found' 00:32:48.519 Process with pid 487914 is not found 00:32:48.519 10:28:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:48.519 10:28:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:48.519 10:28:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:48.519 10:28:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:48.519 10:28:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:48.519 10:28:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.519 10:28:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:48.519 10:28:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.513 
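
Everything from the Data digest error flood above through the zeroed shutdown table is the digest-error test behaving as designed: host/digest.sh points bperf at a target that deliberately corrupts the NVMe/TCP data digest, so every WRITE fails CRC32C verification in data_crc32_calc_done and is completed as TRANSIENT TRANSPORT ERROR (00/22). The pass criterion is only that the error counter read back from bperf is non-zero. A condensed sketch of that check, matching the get_transient_errcount trace above (the standalone shape and the errcount variable are assumptions; the socket path, bdev name, and jq filter are verbatim from the log):

    # count WRITEs that completed with a transient transport error
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    (( errcount > 0 ))   # 342 in this run, so the test passes

The later "kill: (487914) - No such process" is expected noise: nvmftestfini runs killprocess on the nvmf target a second time after host/digest.sh already reaped it, and the helper simply reports the pid as gone.
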
10:28:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:50.513 00:32:50.513 real 0m41.028s 00:32:50.513 user 1m6.051s 00:32:50.513 sys 0m12.396s 00:32:50.513 10:28:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.513 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:32:50.513 ************************************ 00:32:50.513 END TEST nvmf_digest 00:32:50.513 ************************************ 00:32:50.513 10:28:03 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:32:50.513 10:28:03 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:32:50.513 10:28:03 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:32:50.513 10:28:03 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:50.513 10:28:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:50.513 10:28:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:50.513 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:32:50.513 ************************************ 00:32:50.513 START TEST nvmf_bdevperf 00:32:50.513 ************************************ 00:32:50.513 10:28:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:50.513 * Looking for test storage... 00:32:50.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:50.513 10:28:03 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.513 10:28:03 -- nvmf/common.sh@7 -- # uname -s 00:32:50.513 10:28:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.513 10:28:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.513 10:28:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.513 10:28:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.513 10:28:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.513 10:28:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.513 10:28:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.513 10:28:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.513 10:28:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.513 10:28:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.513 10:28:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:50.514 10:28:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:50.514 10:28:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.514 10:28:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.514 10:28:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.514 10:28:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.514 10:28:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.514 10:28:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.514 10:28:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.514 10:28:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.514 10:28:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.514 10:28:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.514 10:28:03 -- paths/export.sh@5 -- # export PATH 00:32:50.514 10:28:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.514 10:28:03 -- nvmf/common.sh@46 -- # : 0 00:32:50.514 10:28:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:50.514 10:28:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:50.514 10:28:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:50.514 10:28:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.514 10:28:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.514 10:28:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:50.514 10:28:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:50.514 10:28:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:50.514 10:28:03 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:50.514 10:28:03 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:50.514 10:28:03 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:50.514 10:28:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:50.514 10:28:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.514 10:28:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:50.514 10:28:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:50.514 10:28:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:50.514 10:28:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:32:50.514 10:28:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:50.514 10:28:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.514 10:28:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:50.514 10:28:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:50.514 10:28:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:50.514 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:32:55.788 10:28:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:55.788 10:28:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:55.788 10:28:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:55.788 10:28:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:55.788 10:28:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:55.788 10:28:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:55.788 10:28:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:55.788 10:28:08 -- nvmf/common.sh@294 -- # net_devs=() 00:32:55.788 10:28:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:55.788 10:28:08 -- nvmf/common.sh@295 -- # e810=() 00:32:55.788 10:28:08 -- nvmf/common.sh@295 -- # local -ga e810 00:32:55.788 10:28:08 -- nvmf/common.sh@296 -- # x722=() 00:32:55.788 10:28:08 -- nvmf/common.sh@296 -- # local -ga x722 00:32:55.788 10:28:08 -- nvmf/common.sh@297 -- # mlx=() 00:32:55.788 10:28:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:55.788 10:28:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.788 10:28:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:55.788 10:28:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:55.788 10:28:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:55.788 10:28:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:55.788 10:28:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:55.788 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:55.788 10:28:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:55.788 10:28:08 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:55.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:55.788 10:28:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:55.788 10:28:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:55.788 10:28:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.788 10:28:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:55.788 10:28:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.788 10:28:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:55.788 Found net devices under 0000:86:00.0: cvl_0_0 00:32:55.788 10:28:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.788 10:28:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:55.788 10:28:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.788 10:28:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:55.788 10:28:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.788 10:28:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:55.788 Found net devices under 0000:86:00.1: cvl_0_1 00:32:55.788 10:28:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.788 10:28:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:55.788 10:28:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:55.788 10:28:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:55.788 10:28:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:55.788 10:28:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.788 10:28:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.788 10:28:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.788 10:28:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:55.788 10:28:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.788 10:28:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.788 10:28:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:55.788 10:28:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.788 10:28:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.788 10:28:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:55.788 10:28:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:55.788 10:28:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.788 10:28:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.788 10:28:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.788 10:28:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.788 10:28:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:55.788 10:28:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
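
With both E810 ports identified (is_hw=yes), nvmf_tcp_init wires the physical loopback: the target port cvl_0_0 moves into its own network namespace as 10.0.0.2/24 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so the NVMe/TCP traffic really crosses the wire between the two ports. Stripped of the shell-trace framing, the sequence above (completed just below by the loopback and iptables lines) amounts to this condensed sketch; error handling and the variable indirection in nvmf/common.sh are omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port

The two ping checks that follow (0% loss, sub-millisecond RTT in both directions) gate the rest of the test; from here on, anything that must run target-side is prefixed with ip netns exec cvl_0_0_ns_spdk.
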
00:32:55.788 10:28:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.788 10:28:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.788 10:28:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:55.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:32:55.788 00:32:55.788 --- 10.0.0.2 ping statistics --- 00:32:55.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.788 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:55.788 10:28:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:32:55.788 00:32:55.788 --- 10.0.0.1 ping statistics --- 00:32:55.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.788 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:55.788 10:28:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.788 10:28:08 -- nvmf/common.sh@410 -- # return 0 00:32:55.788 10:28:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:55.789 10:28:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.789 10:28:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:55.789 10:28:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:55.789 10:28:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.789 10:28:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:55.789 10:28:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:55.789 10:28:08 -- host/bdevperf.sh@25 -- # tgt_init 00:32:55.789 10:28:08 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:55.789 10:28:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:55.789 10:28:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:55.789 10:28:08 -- common/autotest_common.sh@10 -- # set +x 00:32:55.789 10:28:08 -- nvmf/common.sh@469 -- # nvmfpid=494101 00:32:55.789 10:28:08 -- nvmf/common.sh@470 -- # waitforlisten 494101 00:32:55.789 10:28:08 -- common/autotest_common.sh@819 -- # '[' -z 494101 ']' 00:32:55.789 10:28:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.789 10:28:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:55.789 10:28:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.789 10:28:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:55.789 10:28:08 -- common/autotest_common.sh@10 -- # set +x 00:32:55.789 10:28:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:55.789 [2024-04-24 10:28:08.806760] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
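
nvmfappstart then launches the target inside that namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0xE: -e 0xFFFF turns on every tracepoint group (hence the "Tracepoint Group Mask 0xFFFF specified" notice in the banner below) and -m 0xE is the reactor core mask. Decoding such a mask is plain arithmetic, e.g. in bash (an illustration only, not an SPDK tool):

    mask=0xE                          # binary 1110
    for cpu in 0 1 2 3; do
        (( (mask >> cpu) & 1 )) && echo "reactor on core $cpu"
    done
    # prints cores 1, 2 and 3 - matching "Total cores available: 3" and the
    # three reactors reported on cores 1-3 in the startup output below
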
00:32:55.789 [2024-04-24 10:28:08.806804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.789 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.789 [2024-04-24 10:28:08.863964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:55.789 [2024-04-24 10:28:08.941395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:55.789 [2024-04-24 10:28:08.941503] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.789 [2024-04-24 10:28:08.941511] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.789 [2024-04-24 10:28:08.941518] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.789 [2024-04-24 10:28:08.941556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.789 [2024-04-24 10:28:08.941642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.789 [2024-04-24 10:28:08.941643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.356 10:28:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:56.356 10:28:09 -- common/autotest_common.sh@852 -- # return 0 00:32:56.356 10:28:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:56.356 10:28:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:56.356 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:32:56.615 10:28:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.615 10:28:09 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:56.615 10:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.615 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:32:56.615 [2024-04-24 10:28:09.645227] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.615 10:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.615 10:28:09 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:56.615 10:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.615 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:32:56.615 Malloc0 00:32:56.615 10:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.615 10:28:09 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:56.615 10:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.615 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:32:56.615 10:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.615 10:28:09 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.615 10:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.615 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:32:56.615 10:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.615 10:28:09 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.615 10:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.615 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:32:56.615 [2024-04-24 10:28:09.717282] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.615 10:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.615 10:28:09 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:56.615 10:28:09 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:56.616 10:28:09 -- nvmf/common.sh@520 -- # config=() 00:32:56.616 10:28:09 -- nvmf/common.sh@520 -- # local subsystem config 00:32:56.616 10:28:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:56.616 10:28:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:56.616 { 00:32:56.616 "params": { 00:32:56.616 "name": "Nvme$subsystem", 00:32:56.616 "trtype": "$TEST_TRANSPORT", 00:32:56.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:56.616 "adrfam": "ipv4", 00:32:56.616 "trsvcid": "$NVMF_PORT", 00:32:56.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:56.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:56.616 "hdgst": ${hdgst:-false}, 00:32:56.616 "ddgst": ${ddgst:-false} 00:32:56.616 }, 00:32:56.616 "method": "bdev_nvme_attach_controller" 00:32:56.616 } 00:32:56.616 EOF 00:32:56.616 )") 00:32:56.616 10:28:09 -- nvmf/common.sh@542 -- # cat 00:32:56.616 10:28:09 -- nvmf/common.sh@544 -- # jq . 00:32:56.616 10:28:09 -- nvmf/common.sh@545 -- # IFS=, 00:32:56.616 10:28:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:56.616 "params": { 00:32:56.616 "name": "Nvme1", 00:32:56.616 "trtype": "tcp", 00:32:56.616 "traddr": "10.0.0.2", 00:32:56.616 "adrfam": "ipv4", 00:32:56.616 "trsvcid": "4420", 00:32:56.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:56.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:56.616 "hdgst": false, 00:32:56.616 "ddgst": false 00:32:56.616 }, 00:32:56.616 "method": "bdev_nvme_attach_controller" 00:32:56.616 }' 00:32:56.616 [2024-04-24 10:28:09.764462] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:56.616 [2024-04-24 10:28:09.764507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494353 ] 00:32:56.616 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.616 [2024-04-24 10:28:09.818284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.616 [2024-04-24 10:28:09.889868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.184 Running I/O for 1 seconds... 
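
Before this first bdevperf invocation, the target was fully configured through rpc_cmd, which hands each call to scripts/rpc.py over the default /var/tmp/spdk.sock socket; run by hand from the repo root, the sequence traced above is equivalent to the following sketch (verbs and flags exactly as logged):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself is configured differently: --json /dev/fd/62 is a bash process substitution carrying gen_nvmf_target_json's output (the bdev_nvme_attach_controller fragment printed above), so the initiator attaches to 10.0.0.2:4420 without any config file touching disk. The 1-second, queue-depth-128 verify pass whose results follow is only a smoke test of that path.
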
00:32:58.122
00:32:58.122 Latency(us)
00:32:58.122 Device Information          : runtime(s)   IOPS      MiB/s    Fail/s  TO/s    Average    min       max
00:32:58.122 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:58.122 Verification LBA range: start 0x0 length 0x4000
00:32:58.122 Nvme1n1                     :       1.01   16780.71  65.55    0.00    0.00    7598.08    990.16    13905.03
00:32:58.122 ===================================================================================================================
00:32:58.122 Total                       :              16780.71  65.55    0.00    0.00    7598.08    990.16    13905.03
00:32:58.381 10:28:11 -- host/bdevperf.sh@30 -- # bdevperfpid=494593 00:32:58.381 10:28:11 -- host/bdevperf.sh@32 -- # sleep 3 00:32:58.381 10:28:11 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:58.381 10:28:11 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:58.381 10:28:11 -- nvmf/common.sh@520 -- # config=() 00:32:58.381 10:28:11 -- nvmf/common.sh@520 -- # local subsystem config 00:32:58.381 10:28:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:58.381 10:28:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:58.381 { 00:32:58.381 "params": { 00:32:58.381 "name": "Nvme$subsystem", 00:32:58.381 "trtype": "$TEST_TRANSPORT", 00:32:58.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.381 "adrfam": "ipv4", 00:32:58.381 "trsvcid": "$NVMF_PORT", 00:32:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.381 "hdgst": ${hdgst:-false}, 00:32:58.381 "ddgst": ${ddgst:-false} 00:32:58.381 }, 00:32:58.381 "method": "bdev_nvme_attach_controller" 00:32:58.381 } 00:32:58.381 EOF 00:32:58.381 )") 00:32:58.381 10:28:11 -- nvmf/common.sh@542 -- # cat 00:32:58.381 10:28:11 -- nvmf/common.sh@544 -- # jq . 00:32:58.381 10:28:11 -- nvmf/common.sh@545 -- # IFS=, 00:32:58.381 10:28:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:58.381 "params": { 00:32:58.381 "name": "Nvme1", 00:32:58.381 "trtype": "tcp", 00:32:58.381 "traddr": "10.0.0.2", 00:32:58.381 "adrfam": "ipv4", 00:32:58.381 "trsvcid": "4420", 00:32:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.381 "hdgst": false, 00:32:58.381 "ddgst": false 00:32:58.381 }, 00:32:58.381 "method": "bdev_nvme_attach_controller" 00:32:58.381 }' 00:32:58.381 [2024-04-24 10:28:11.465640] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:58.381 [2024-04-24 10:28:11.465688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494593 ] 00:32:58.381 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.381 [2024-04-24 10:28:11.519424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.381 [2024-04-24 10:28:11.590592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.640 Running I/O for 15 seconds...
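The 1-second verify pass above finishes cleanly at ~16.8k IOPS; the 15-second pass is then started with -f precisely so the harness can hard-kill the target under it (the kill -9 494101 a few lines down) and exercise the reset path. For anyone replaying this job by hand, here is a minimal, hedged sketch of the target layout the rpc_cmd calls above create and of how bdevperf is fed its config; scripts/rpc.py is the standalone equivalent of the harness's rpc_cmd wrapper, and everything except the flags, addresses and NQNs copied from the trace is illustrative:

```bash
# Hedged sketch, not the harness itself: rebuild the target side that
# host/bdevperf.sh drives above via rpc_cmd (assumes an nvmf_tgt is
# already running and listening on its default RPC socket).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf never sees a config file on disk: gen_nvmf_target_json (the
# helper from the nvmf/common.sh seen in the xtrace) prints the
# bdev_nvme_attach_controller JSON shown above, and a bash process
# substitution hands it over as an anonymous fd -- which is exactly why
# the trace records --json /dev/fd/62 and /dev/fd/63.
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
```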
00:33:01.177 10:28:14 -- host/bdevperf.sh@33 -- # kill -9 494101 00:33:01.177 10:28:14 -- host/bdevperf.sh@35 -- # sleep 3
00:33:01.177 [2024-04-24 10:28:14.438894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:01.177 [2024-04-24 10:28:14.438931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: every outstanding READ/WRITE on qid:1 (LBAs 82360-83584, len:8 each) completes as ABORTED - SQ DELETION (00/08) between 10:28:14.438 and 10:28:14.440, once target pid 494101 has been killed ...]
00:33:01.180 [2024-04-24 10:28:14.440921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829b80 is same with the state(5) to be set
[2024-04-24 10:28:14.440929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-04-24 10:28:14.440934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-04-24 10:28:14.440940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0
[2024-04-24 10:28:14.440947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-24 10:28:14.440990] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x829b80 was disconnected and freed. reset controller.
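All of the aborted records above carry the identical completion status (ABORTED - SQ DELETION, sqhd:0000, dnr:0), so when triaging a run like this the useful signal is just the opcode mix and the LBA window that queue depth 128 had in flight when the target died. A hedged pair of one-liners for a saved copy of this console output (the filename is illustrative):

```bash
# Tally aborted submissions per opcode from a saved copy of this log.
grep -o 'NOTICE\*: READ sqid:1\|NOTICE\*: WRITE sqid:1' console.log | sort | uniq -c

# Min/max LBA in flight when the target was killed (82360 and 83584 here).
grep -o 'lba:[0-9]*' console.log | cut -d: -f2 | sort -n | sed -n '1p;$p'
```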
00:33:01.180 [2024-04-24 10:28:14.443056] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.180 [2024-04-24 10:28:14.443111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.180 [2024-04-24 10:28:14.443682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.180 [2024-04-24 10:28:14.443938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.180 [2024-04-24 10:28:14.443970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.180 [2024-04-24 10:28:14.443992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.180 [2024-04-24 10:28:14.444438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.180 [2024-04-24 10:28:14.444566] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.180 [2024-04-24 10:28:14.444574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.180 [2024-04-24 10:28:14.444582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.180 [2024-04-24 10:28:14.446507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.441 [2024-04-24 10:28:14.455354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.441 [2024-04-24 10:28:14.455820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.441 [2024-04-24 10:28:14.456179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.441 [2024-04-24 10:28:14.456215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.441 [2024-04-24 10:28:14.456238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.441 [2024-04-24 10:28:14.456521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.441 [2024-04-24 10:28:14.456897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.441 [2024-04-24 10:28:14.456906] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.441 [2024-04-24 10:28:14.456913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.441 [2024-04-24 10:28:14.458681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.441 [2024-04-24 10:28:14.467316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.441 [2024-04-24 10:28:14.467739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.441 [2024-04-24 10:28:14.468094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.441 [2024-04-24 10:28:14.468127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.441 [2024-04-24 10:28:14.468150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.441 [2024-04-24 10:28:14.468482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.441 [2024-04-24 10:28:14.468855] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.441 [2024-04-24 10:28:14.468864] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.441 [2024-04-24 10:28:14.468871] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.441 [2024-04-24 10:28:14.470500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.441 [2024-04-24 10:28:14.479228] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.441 [2024-04-24 10:28:14.479657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.441 [2024-04-24 10:28:14.480007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.442 [2024-04-24 10:28:14.480050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.442 [2024-04-24 10:28:14.480090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.442 [2024-04-24 10:28:14.480472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.442 [2024-04-24 10:28:14.480852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.442 [2024-04-24 10:28:14.480865] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.442 [2024-04-24 10:28:14.480874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.442 [2024-04-24 10:28:14.483329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.969 [2024-04-24 10:28:15.015338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.015758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.016029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.016074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.016082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.016190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.016312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.016322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.016328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.017851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.969 [2024-04-24 10:28:15.027259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.027695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.027973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.028004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.028027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.028243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.028339] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.028348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.028354] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.029988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.969 [2024-04-24 10:28:15.039236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.039527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.039842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.039873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.039895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.040252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.040368] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.040377] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.040383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.042233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.969 [2024-04-24 10:28:15.051143] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.051598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.051893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.051924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.051946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.052391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.052493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.052502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.052509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.054391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.969 [2024-04-24 10:28:15.063175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.063553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.063846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.063857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.063866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.063954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.064093] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.064104] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.064110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.065847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.969 [2024-04-24 10:28:15.075035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.075357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.075580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.075592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.075599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.075671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.075804] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.075812] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.075819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.077531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.969 [2024-04-24 10:28:15.087138] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.087593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.087768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.087779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.087787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.087939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.088044] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.088054] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.088061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.089903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.969 [2024-04-24 10:28:15.099293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.969 [2024-04-24 10:28:15.099676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.099969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.969 [2024-04-24 10:28:15.099981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.969 [2024-04-24 10:28:15.099989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.969 [2024-04-24 10:28:15.100142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.969 [2024-04-24 10:28:15.100295] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.969 [2024-04-24 10:28:15.100305] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.969 [2024-04-24 10:28:15.100312] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.969 [2024-04-24 10:28:15.102286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.969 [2024-04-24 10:28:15.111441] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.111865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.112142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.112154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.112162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.112284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.112421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.112431] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.112437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.114237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.970 [2024-04-24 10:28:15.123548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.123989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.124287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.124299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.124307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.124427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.124548] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.124558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.124565] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.126360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.970 [2024-04-24 10:28:15.135563] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.135916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.136881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.136905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.136914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.137027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.137172] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.137182] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.137189] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.139080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.970 [2024-04-24 10:28:15.147541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.147945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.148167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.148200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.148231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.148453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.148572] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.148582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.148588] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.150323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.970 [2024-04-24 10:28:15.159626] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.159966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.160192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.160204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.160212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.160315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.160433] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.160442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.160450] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.162255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.970 [2024-04-24 10:28:15.171440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.171814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.172048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.172091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.172115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.172446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.172594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.172604] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.172611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.174360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.970 [2024-04-24 10:28:15.183211] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.183572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.183792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.183822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.183843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.184197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.184483] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.184492] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.184499] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.186128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.970 [2024-04-24 10:28:15.195016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.195285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.195556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.195586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.195609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.195940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.196278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.196288] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.196294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.198127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.970 [2024-04-24 10:28:15.206901] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.207690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.207928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.207941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.207949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.208049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.970 [2024-04-24 10:28:15.208148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.970 [2024-04-24 10:28:15.208157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.970 [2024-04-24 10:28:15.208163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.970 [2024-04-24 10:28:15.209841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.970 [2024-04-24 10:28:15.218722] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.970 [2024-04-24 10:28:15.219118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.219353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.970 [2024-04-24 10:28:15.219385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.970 [2024-04-24 10:28:15.219407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.970 [2024-04-24 10:28:15.219624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.971 [2024-04-24 10:28:15.219722] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.971 [2024-04-24 10:28:15.219731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.971 [2024-04-24 10:28:15.219737] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.971 [2024-04-24 10:28:15.221283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:01.971 [2024-04-24 10:28:15.230707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.971 [2024-04-24 10:28:15.231044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.971 [2024-04-24 10:28:15.231284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.971 [2024-04-24 10:28:15.231316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.971 [2024-04-24 10:28:15.231338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.971 [2024-04-24 10:28:15.231684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.971 [2024-04-24 10:28:15.231807] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.971 [2024-04-24 10:28:15.231816] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.971 [2024-04-24 10:28:15.231822] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:01.971 [2024-04-24 10:28:15.233459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:01.971 [2024-04-24 10:28:15.242757] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:01.971 [2024-04-24 10:28:15.243205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.971 [2024-04-24 10:28:15.243425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.971 [2024-04-24 10:28:15.243455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:01.971 [2024-04-24 10:28:15.243478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:01.971 [2024-04-24 10:28:15.243678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:01.971 [2024-04-24 10:28:15.243781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:01.971 [2024-04-24 10:28:15.243790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:01.971 [2024-04-24 10:28:15.243798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.232 [2024-04-24 10:28:15.245716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.232 [2024-04-24 10:28:15.254833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.232 [2024-04-24 10:28:15.255212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.255511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.255542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.232 [2024-04-24 10:28:15.255565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.232 [2024-04-24 10:28:15.255877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.232 [2024-04-24 10:28:15.255977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.232 [2024-04-24 10:28:15.255990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.232 [2024-04-24 10:28:15.255996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.232 [2024-04-24 10:28:15.257498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.232 [2024-04-24 10:28:15.266552] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.232 [2024-04-24 10:28:15.266991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.267230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.267262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.232 [2024-04-24 10:28:15.267284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.232 [2024-04-24 10:28:15.267580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.232 [2024-04-24 10:28:15.267675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.232 [2024-04-24 10:28:15.267684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.232 [2024-04-24 10:28:15.267690] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.232 [2024-04-24 10:28:15.269343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.232 [2024-04-24 10:28:15.278378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.232 [2024-04-24 10:28:15.278634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.278859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.278891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.232 [2024-04-24 10:28:15.278913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.232 [2024-04-24 10:28:15.279356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.232 [2024-04-24 10:28:15.279545] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.232 [2024-04-24 10:28:15.279554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.232 [2024-04-24 10:28:15.279560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.232 [2024-04-24 10:28:15.282129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.232 [2024-04-24 10:28:15.290792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.232 [2024-04-24 10:28:15.291081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.291367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.291399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.232 [2024-04-24 10:28:15.291421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.232 [2024-04-24 10:28:15.291701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.232 [2024-04-24 10:28:15.291825] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.232 [2024-04-24 10:28:15.291835] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.232 [2024-04-24 10:28:15.291845] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.232 [2024-04-24 10:28:15.293477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.232 [2024-04-24 10:28:15.302688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.232 [2024-04-24 10:28:15.303080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.303312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.232 [2024-04-24 10:28:15.303343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.232 [2024-04-24 10:28:15.303364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.232 [2024-04-24 10:28:15.303794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.232 [2024-04-24 10:28:15.303918] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.232 [2024-04-24 10:28:15.303927] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.232 [2024-04-24 10:28:15.303933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.305479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.233 [2024-04-24 10:28:15.314580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.314957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.315184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.315196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.315203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.315311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.315406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.315414] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.315420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.317212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.233 [2024-04-24 10:28:15.326505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.326829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.327148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.327181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.327203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.327686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.327781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.327790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.327796] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.329482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.233 [2024-04-24 10:28:15.338372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.338717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.338998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.339029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.339051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.339443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.339677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.339686] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.339692] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.342086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.233 [2024-04-24 10:28:15.350735] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.351110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.351427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.351459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.351480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.351710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.351840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.351849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.351856] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.353592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.233 [2024-04-24 10:28:15.362815] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.363149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.363396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.363407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.363414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.363502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.363589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.363598] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.363605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.365348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.233 [2024-04-24 10:28:15.374834] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.375230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.375503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.375515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.375522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.375613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.375733] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.375743] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.375750] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.377483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.233 [2024-04-24 10:28:15.387087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.387467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.387736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.387748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.387755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.387924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.388029] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.388039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.388046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.389953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.233 [2024-04-24 10:28:15.399265] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.233 [2024-04-24 10:28:15.399718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.400014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.233 [2024-04-24 10:28:15.400026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.233 [2024-04-24 10:28:15.400034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.233 [2024-04-24 10:28:15.400168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.233 [2024-04-24 10:28:15.400314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.233 [2024-04-24 10:28:15.400324] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.233 [2024-04-24 10:28:15.400331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.233 [2024-04-24 10:28:15.402379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.233 [2024-04-24 10:28:15.411283] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.234 [2024-04-24 10:28:15.411581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.411884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.411896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.234 [2024-04-24 10:28:15.411904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.234 [2024-04-24 10:28:15.412017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.234 [2024-04-24 10:28:15.412151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.234 [2024-04-24 10:28:15.412161] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.234 [2024-04-24 10:28:15.412168] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.234 [2024-04-24 10:28:15.414538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.234 [2024-04-24 10:28:15.423526] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.234 [2024-04-24 10:28:15.423929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.424180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.424212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.234 [2024-04-24 10:28:15.424235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.234 [2024-04-24 10:28:15.424516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.234 [2024-04-24 10:28:15.424948] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.234 [2024-04-24 10:28:15.424973] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.234 [2024-04-24 10:28:15.424993] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.234 [2024-04-24 10:28:15.427089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.234 [2024-04-24 10:28:15.435464] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.234 [2024-04-24 10:28:15.435765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.436068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.436113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.234 [2024-04-24 10:28:15.436136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.234 [2024-04-24 10:28:15.436663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.234 [2024-04-24 10:28:15.436945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.234 [2024-04-24 10:28:15.436970] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.234 [2024-04-24 10:28:15.436991] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.234 [2024-04-24 10:28:15.438980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.234 [2024-04-24 10:28:15.447623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.234 [2024-04-24 10:28:15.448135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.448430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.448469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.234 [2024-04-24 10:28:15.448491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.234 [2024-04-24 10:28:15.448870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.234 [2024-04-24 10:28:15.449177] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.234 [2024-04-24 10:28:15.449188] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.234 [2024-04-24 10:28:15.449195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.234 [2024-04-24 10:28:15.450913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:02.234 [2024-04-24 10:28:15.459436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:02.234 [2024-04-24 10:28:15.459853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.460168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:02.234 [2024-04-24 10:28:15.460200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:02.234 [2024-04-24 10:28:15.460221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:02.234 [2024-04-24 10:28:15.460419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:02.234 [2024-04-24 10:28:15.460528] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:02.234 [2024-04-24 10:28:15.460537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:02.234 [2024-04-24 10:28:15.460543] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:02.234 [2024-04-24 10:28:15.462371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:02.234 [2024-04-24 10:28:15.471243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.234 [2024-04-24 10:28:15.471688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.234 [2024-04-24 10:28:15.471953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.234 [2024-04-24 10:28:15.471983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.234 [2024-04-24 10:28:15.472005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.234 [2024-04-24 10:28:15.472260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.234 [2024-04-24 10:28:15.472384] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.234 [2024-04-24 10:28:15.472393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.234 [2024-04-24 10:28:15.472400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.234 [2024-04-24 10:28:15.474061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.234 [2024-04-24 10:28:15.483014] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.234 [2024-04-24 10:28:15.483423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.234 [2024-04-24 10:28:15.483758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.234 [2024-04-24 10:28:15.483789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.234 [2024-04-24 10:28:15.483817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.234 [2024-04-24 10:28:15.484261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.234 [2024-04-24 10:28:15.484399] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.234 [2024-04-24 10:28:15.484408] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.234 [2024-04-24 10:28:15.484414] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.234 [2024-04-24 10:28:15.486008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.234 [2024-04-24 10:28:15.494868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.234 [2024-04-24 10:28:15.495315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.234 [2024-04-24 10:28:15.495600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.234 [2024-04-24 10:28:15.495630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.234 [2024-04-24 10:28:15.495652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.234 [2024-04-24 10:28:15.496170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.234 [2024-04-24 10:28:15.496280] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.234 [2024-04-24 10:28:15.496289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.235 [2024-04-24 10:28:15.496295] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.235 [2024-04-24 10:28:15.497833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.235 [2024-04-24 10:28:15.506881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.235 [2024-04-24 10:28:15.507242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.235 [2024-04-24 10:28:15.507445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.235 [2024-04-24 10:28:15.507457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.235 [2024-04-24 10:28:15.507464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.235 [2024-04-24 10:28:15.507593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.235 [2024-04-24 10:28:15.507693] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.235 [2024-04-24 10:28:15.507715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.235 [2024-04-24 10:28:15.507721] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.496 [2024-04-24 10:28:15.509484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.496 [2024-04-24 10:28:15.518836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.496 [2024-04-24 10:28:15.519305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.496 [2024-04-24 10:28:15.519535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.496 [2024-04-24 10:28:15.519565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.496 [2024-04-24 10:28:15.519587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.496 [2024-04-24 10:28:15.520089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.496 [2024-04-24 10:28:15.520288] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.496 [2024-04-24 10:28:15.520297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.496 [2024-04-24 10:28:15.520303] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.496 [2024-04-24 10:28:15.522045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.496 [2024-04-24 10:28:15.530761] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.496 [2024-04-24 10:28:15.531184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.496 [2024-04-24 10:28:15.531412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.531444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.531465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.531844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.532092] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.532118] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.532139] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.533945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.542551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.543009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.543314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.543347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.543369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.543506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.543642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.543651] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.543657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.545268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.554149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.554609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.554894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.554925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.554947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.555290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.555532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.555541] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.555547] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.557198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.565836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.566283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.566601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.566632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.566654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.567099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.567686] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.567695] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.567701] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.569502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.577736] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.578160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.578371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.578401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.578423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.578713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.578794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.578803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.578810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.580371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.589485] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.589912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.590249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.590282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.590305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.590520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.590657] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.590669] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.590675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.592428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.601322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.601772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.602097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.602130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.602152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.602361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.602428] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.602437] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.602443] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.604709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.613843] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.614308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.614626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.614657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.614678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.615058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.615321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.615331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.615338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.617063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.625643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.626043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.626369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.626401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.626424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.626647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.497 [2024-04-24 10:28:15.626756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.497 [2024-04-24 10:28:15.626765] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.497 [2024-04-24 10:28:15.626774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.497 [2024-04-24 10:28:15.628414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.497 [2024-04-24 10:28:15.637363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.497 [2024-04-24 10:28:15.637665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.637890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.497 [2024-04-24 10:28:15.637921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.497 [2024-04-24 10:28:15.637943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.497 [2024-04-24 10:28:15.638288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.638471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.638481] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.638486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.640122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.649142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.649555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.649776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.649807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.649828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.650122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.650457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.650466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.650472] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.652274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.660864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.661267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.661559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.661571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.661577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.661671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.661807] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.661816] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.661821] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.663517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.672871] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.673137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.673433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.673464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.673486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.673914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.674314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.674340] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.674360] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.676374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.684708] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.685136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.685430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.685461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.685482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.685934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.686056] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.686065] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.686076] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.687671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.696621] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.697033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.697278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.697311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.697332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.697706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.697787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.697795] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.697801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.699533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.708665] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.709093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.709433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.709464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.709487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.709695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.709825] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.709835] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.709841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.711497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.720508] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.720954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.721169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.721181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.721188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.721310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.721432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.721441] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.721447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.723015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.732102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.732511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.732776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.732807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.732828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.732988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.733117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.733126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.733132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.734877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.743929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.744407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.744712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.498 [2024-04-24 10:28:15.744744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.498 [2024-04-24 10:28:15.744766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.498 [2024-04-24 10:28:15.744970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.498 [2024-04-24 10:28:15.745037] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.498 [2024-04-24 10:28:15.745046] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.498 [2024-04-24 10:28:15.745052] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.498 [2024-04-24 10:28:15.746718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.498 [2024-04-24 10:28:15.755739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.498 [2024-04-24 10:28:15.756154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.499 [2024-04-24 10:28:15.756446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.499 [2024-04-24 10:28:15.756477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.499 [2024-04-24 10:28:15.756499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.499 [2024-04-24 10:28:15.756929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.499 [2024-04-24 10:28:15.757116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.499 [2024-04-24 10:28:15.757126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.499 [2024-04-24 10:28:15.757133] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.499 [2024-04-24 10:28:15.758755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.499 [2024-04-24 10:28:15.767520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.499 [2024-04-24 10:28:15.767887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.499 [2024-04-24 10:28:15.768040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.499 [2024-04-24 10:28:15.768051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.499 [2024-04-24 10:28:15.768058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.499 [2024-04-24 10:28:15.768150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.499 [2024-04-24 10:28:15.768238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.499 [2024-04-24 10:28:15.768247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.499 [2024-04-24 10:28:15.768254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.499 [2024-04-24 10:28:15.770221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.779255] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.779711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.780005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.780016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.780026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.780120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.760 [2024-04-24 10:28:15.780238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.760 [2024-04-24 10:28:15.780247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.760 [2024-04-24 10:28:15.780254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.760 [2024-04-24 10:28:15.781840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.791140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.791547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.791751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.791782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.791804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.792100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.760 [2024-04-24 10:28:15.792253] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.760 [2024-04-24 10:28:15.792262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.760 [2024-04-24 10:28:15.792268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.760 [2024-04-24 10:28:15.793992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.802951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.803353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.803652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.803683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.803704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.804044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.760 [2024-04-24 10:28:15.804200] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.760 [2024-04-24 10:28:15.804209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.760 [2024-04-24 10:28:15.804215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.760 [2024-04-24 10:28:15.805797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.814833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.815204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.815443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.815453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.815460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.815600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.760 [2024-04-24 10:28:15.815707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.760 [2024-04-24 10:28:15.815716] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.760 [2024-04-24 10:28:15.815722] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.760 [2024-04-24 10:28:15.817266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.826740] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.827130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.827396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.827426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.827448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.827827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.760 [2024-04-24 10:28:15.828134] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.760 [2024-04-24 10:28:15.828144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.760 [2024-04-24 10:28:15.828150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.760 [2024-04-24 10:28:15.829689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.838673] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.839075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.839409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.839440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.839462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.839841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.760 [2024-04-24 10:28:15.840035] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.760 [2024-04-24 10:28:15.840044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.760 [2024-04-24 10:28:15.840050] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.760 [2024-04-24 10:28:15.841876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.760 [2024-04-24 10:28:15.850543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.760 [2024-04-24 10:28:15.850971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.851304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.760 [2024-04-24 10:28:15.851339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.760 [2024-04-24 10:28:15.851361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.760 [2024-04-24 10:28:15.851593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.852031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.852056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.852090] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.854039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.862601] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.863041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.863368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.863399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.863423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.863852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.864059] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.864067] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.864078] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.865863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.874349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.874791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.875106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.875138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.875160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.875590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.876021] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.876046] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.876066] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.877917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.886243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.886538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.886831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.886842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.886864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.887310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.887490] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.887501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.887507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.889090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.897999] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.898415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.898675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.898706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.898729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.899025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.899126] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.899134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.899140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.901035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.909868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.910232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.910454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.910485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.910507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.910786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.911106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.911116] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.911122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.912700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.921730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.922061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.922370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.922402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.922424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.922804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.923090] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.923099] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.923110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.924662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.933645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.934034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.934390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.934422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.934443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.934653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.934777] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.934786] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.934792] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.936593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.945603] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.946045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.946327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.946358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.946380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.946633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.946763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.946772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.946778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.761 [2024-04-24 10:28:15.948514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.761 [2024-04-24 10:28:15.957496] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.761 [2024-04-24 10:28:15.957842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.958025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.761 [2024-04-24 10:28:15.958056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.761 [2024-04-24 10:28:15.958091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.761 [2024-04-24 10:28:15.958319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.761 [2024-04-24 10:28:15.958407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.761 [2024-04-24 10:28:15.958417] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.761 [2024-04-24 10:28:15.958423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:15.960137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.762 [2024-04-24 10:28:15.969185] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.762 [2024-04-24 10:28:15.969601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:15.969936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:15.969968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.762 [2024-04-24 10:28:15.969990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.762 [2024-04-24 10:28:15.970237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.762 [2024-04-24 10:28:15.970346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.762 [2024-04-24 10:28:15.970356] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.762 [2024-04-24 10:28:15.970361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:15.972080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.762 [2024-04-24 10:28:15.981037] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.762 [2024-04-24 10:28:15.981452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:15.981715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:15.981746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.762 [2024-04-24 10:28:15.981768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.762 [2024-04-24 10:28:15.982161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.762 [2024-04-24 10:28:15.982516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.762 [2024-04-24 10:28:15.982525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.762 [2024-04-24 10:28:15.982531] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:15.984166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.762 [2024-04-24 10:28:15.992831] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.762 [2024-04-24 10:28:15.993211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:15.993509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:15.993540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.762 [2024-04-24 10:28:15.993562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.762 [2024-04-24 10:28:15.993942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.762 [2024-04-24 10:28:15.994305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.762 [2024-04-24 10:28:15.994315] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.762 [2024-04-24 10:28:15.994322] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:15.995990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.762 [2024-04-24 10:28:16.004808] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.762 [2024-04-24 10:28:16.005257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:16.005545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:16.005576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.762 [2024-04-24 10:28:16.005598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.762 [2024-04-24 10:28:16.005872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.762 [2024-04-24 10:28:16.005953] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.762 [2024-04-24 10:28:16.005961] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.762 [2024-04-24 10:28:16.005967] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:16.007404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.762 [2024-04-24 10:28:16.016602] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.762 [2024-04-24 10:28:16.017085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:16.017359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:16.017390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.762 [2024-04-24 10:28:16.017412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.762 [2024-04-24 10:28:16.017741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.762 [2024-04-24 10:28:16.017823] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.762 [2024-04-24 10:28:16.017832] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.762 [2024-04-24 10:28:16.017838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:16.019412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:02.762 [2024-04-24 10:28:16.028396] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:02.762 [2024-04-24 10:28:16.028830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:16.028967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:02.762 [2024-04-24 10:28:16.028997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:02.762 [2024-04-24 10:28:16.029020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:02.762 [2024-04-24 10:28:16.029365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:02.762 [2024-04-24 10:28:16.029460] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:02.762 [2024-04-24 10:28:16.029469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:02.762 [2024-04-24 10:28:16.029476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:02.762 [2024-04-24 10:28:16.031088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.023 [2024-04-24 10:28:16.040285] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.023 [2024-04-24 10:28:16.040665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.040988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.041019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.041042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.041440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.041607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.041616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.041623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.043422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.052178] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.052596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.052962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.052992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.053013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.053406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.053716] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.053725] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.053731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.055340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.064184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.064594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.064882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.064912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.064934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.065478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.065602] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.065611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.065616] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.067393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.076177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.076533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.076760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.076798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.076820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.077163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.077596] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.077621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.077642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.079658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.088216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.088637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.088859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.088871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.088878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.088978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.089098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.089107] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.089114] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.090876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.100150] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.100568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.100847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.100878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.100899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.101245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.101677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.101702] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.101723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.103428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.111960] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.112362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.112641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.112672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.112701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.113033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.113275] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.113300] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.113321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.115417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.123814] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.124147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.124368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.124398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.124421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.124800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.124987] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.124996] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.125002] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.126548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.135719] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.136131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.136419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.136450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.136472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.136896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.024 [2024-04-24 10:28:16.136996] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.024 [2024-04-24 10:28:16.137005] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.024 [2024-04-24 10:28:16.137012] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.024 [2024-04-24 10:28:16.138636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.024 [2024-04-24 10:28:16.147675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.024 [2024-04-24 10:28:16.148118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.148389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.024 [2024-04-24 10:28:16.148401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.024 [2024-04-24 10:28:16.148408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.024 [2024-04-24 10:28:16.148510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.148624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.148634] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.148641] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.150311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.159603] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.159996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.160338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.160372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.160394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.160630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.160712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.160721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.160728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.162546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.171507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.171921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.172260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.172293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.172314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.172645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.172927] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.172953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.172974] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.174732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.183281] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.183610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.183961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.183993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.184015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.184456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.184748] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.184773] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.184793] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.186825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.195157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.195556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.195788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.195819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.195841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.196234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.196617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.196641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.196662] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.198713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.207132] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.207595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.207893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.207925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.207946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.208188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.208379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.208392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.208401] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.210726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.219566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.219957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.220252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.220264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.220270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.220381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.220492] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.220505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.220511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.222258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.231514] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.231941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.232282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.232315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.232337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.232767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.233161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.233170] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.233177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.234758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.243389] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.243820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.244101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.244134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.244156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.244285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.244408] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.244417] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.244423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.246018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.025 [2024-04-24 10:28:16.255176] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.025 [2024-04-24 10:28:16.255585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.255930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.025 [2024-04-24 10:28:16.255961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.025 [2024-04-24 10:28:16.255982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.025 [2024-04-24 10:28:16.256190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.025 [2024-04-24 10:28:16.256285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.025 [2024-04-24 10:28:16.256294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.025 [2024-04-24 10:28:16.256304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.025 [2024-04-24 10:28:16.258161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.026 [2024-04-24 10:28:16.266978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.026 [2024-04-24 10:28:16.267395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.026 [2024-04-24 10:28:16.267687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.026 [2024-04-24 10:28:16.267718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.026 [2024-04-24 10:28:16.267740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.026 [2024-04-24 10:28:16.268182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.026 [2024-04-24 10:28:16.268331] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.026 [2024-04-24 10:28:16.268341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.026 [2024-04-24 10:28:16.268347] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.026 [2024-04-24 10:28:16.269912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.026 [2024-04-24 10:28:16.278824] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.026 [2024-04-24 10:28:16.279291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.026 [2024-04-24 10:28:16.279640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.026 [2024-04-24 10:28:16.279670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.026 [2024-04-24 10:28:16.279692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.026 [2024-04-24 10:28:16.279888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.026 [2024-04-24 10:28:16.279997] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.026 [2024-04-24 10:28:16.280007] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.026 [2024-04-24 10:28:16.280013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.026 [2024-04-24 10:28:16.281724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.026 [2024-04-24 10:28:16.290736] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.026 [2024-04-24 10:28:16.291066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.026 [2024-04-24 10:28:16.291370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.026 [2024-04-24 10:28:16.291401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.026 [2024-04-24 10:28:16.291423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.026 [2024-04-24 10:28:16.291902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.026 [2024-04-24 10:28:16.292148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.026 [2024-04-24 10:28:16.292175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.026 [2024-04-24 10:28:16.292195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.026 [2024-04-24 10:28:16.293939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.287 [2024-04-24 10:28:16.302721] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.287 [2024-04-24 10:28:16.303158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.303437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.303468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.287 [2024-04-24 10:28:16.303490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.287 [2024-04-24 10:28:16.303919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.287 [2024-04-24 10:28:16.304139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.287 [2024-04-24 10:28:16.304148] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.287 [2024-04-24 10:28:16.304154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.287 [2024-04-24 10:28:16.305948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.287 [2024-04-24 10:28:16.314465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.287 [2024-04-24 10:28:16.314916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.315184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.315218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.287 [2024-04-24 10:28:16.315241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.287 [2024-04-24 10:28:16.315461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.287 [2024-04-24 10:28:16.315612] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.287 [2024-04-24 10:28:16.315622] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.287 [2024-04-24 10:28:16.315628] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.287 [2024-04-24 10:28:16.317253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.287 [2024-04-24 10:28:16.326369] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.287 [2024-04-24 10:28:16.326700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.326975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.327006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.287 [2024-04-24 10:28:16.327029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.287 [2024-04-24 10:28:16.327374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.287 [2024-04-24 10:28:16.327522] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.287 [2024-04-24 10:28:16.327532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.287 [2024-04-24 10:28:16.327538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.287 [2024-04-24 10:28:16.329080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.287 [2024-04-24 10:28:16.338094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.287 [2024-04-24 10:28:16.338501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.338812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.287 [2024-04-24 10:28:16.338843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.287 [2024-04-24 10:28:16.338864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.339158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.339427] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.339436] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.339442] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.341950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.350543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.350991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.351284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.351316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.351338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.351435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.351546] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.351555] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.351561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.353306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.362189] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.362598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.362966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.362996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.363017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.363140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.363235] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.363243] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.363248] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.364897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.374100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.374480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.374764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.374795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.374817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.375119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.375271] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.375280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.375286] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.377005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.386022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.386371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.386711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.386743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.386765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.387107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.387442] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.387469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.387475] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.389029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.397860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.398311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.398534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.398565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.398586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.398916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.399364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.399391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.399410] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.401277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.409736] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.410169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.410533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.410565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.410593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.410771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.410866] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.410874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.410880] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.412385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.421701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.422140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.422459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.422471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.288 [2024-04-24 10:28:16.422479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.288 [2024-04-24 10:28:16.422613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.288 [2024-04-24 10:28:16.422716] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.288 [2024-04-24 10:28:16.422725] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.288 [2024-04-24 10:28:16.422732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.288 [2024-04-24 10:28:16.424537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.288 [2024-04-24 10:28:16.433673] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.288 [2024-04-24 10:28:16.434183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.434400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.288 [2024-04-24 10:28:16.434431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.434454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.434696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.434767] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.434777] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.434783] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.436480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.445709] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.446168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.446388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.446399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.446406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.446542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.446705] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.446714] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.446720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.448674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.457623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.458041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.458351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.458364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.458371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.458489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.458606] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.458614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.458620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.460470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.469431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.469883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.470148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.470167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.470174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.470291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.470407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.470415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.470421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.472289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.481601] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.481990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.482289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.482322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.482343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.482489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.482609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.482617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.482623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.484496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.493645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.494090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.494387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.494418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.494439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.494817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.494946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.494953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.494960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.496659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.505412] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.505748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.506022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.506053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.289 [2024-04-24 10:28:16.506087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.289 [2024-04-24 10:28:16.506368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.289 [2024-04-24 10:28:16.506649] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.289 [2024-04-24 10:28:16.506673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.289 [2024-04-24 10:28:16.506693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.289 [2024-04-24 10:28:16.508430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.289 [2024-04-24 10:28:16.517302] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.289 [2024-04-24 10:28:16.517670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.517958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.289 [2024-04-24 10:28:16.517967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.290 [2024-04-24 10:28:16.517974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.290 [2024-04-24 10:28:16.518088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.290 [2024-04-24 10:28:16.518182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.290 [2024-04-24 10:28:16.518192] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.290 [2024-04-24 10:28:16.518198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.290 [2024-04-24 10:28:16.519846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.290 [2024-04-24 10:28:16.529256] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.290 [2024-04-24 10:28:16.529699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.290 [2024-04-24 10:28:16.529961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.290 [2024-04-24 10:28:16.529991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.290 [2024-04-24 10:28:16.530013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.290 [2024-04-24 10:28:16.530353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.290 [2024-04-24 10:28:16.530543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.290 [2024-04-24 10:28:16.530551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.290 [2024-04-24 10:28:16.530558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.290 [2024-04-24 10:28:16.532332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.290 [2024-04-24 10:28:16.541083] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.290 [2024-04-24 10:28:16.541505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.290 [2024-04-24 10:28:16.541734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.290 [2024-04-24 10:28:16.541766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.290 [2024-04-24 10:28:16.541788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.290 [2024-04-24 10:28:16.542131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.290 [2024-04-24 10:28:16.542465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.290 [2024-04-24 10:28:16.542489] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.290 [2024-04-24 10:28:16.542509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.290 [2024-04-24 10:28:16.544371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.290 [2024-04-24 10:28:16.552993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.290 [2024-04-24 10:28:16.553383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.290 [2024-04-24 10:28:16.553653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.290 [2024-04-24 10:28:16.553683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.290 [2024-04-24 10:28:16.553704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.290 [2024-04-24 10:28:16.553937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.290 [2024-04-24 10:28:16.554030] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.290 [2024-04-24 10:28:16.554037] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.290 [2024-04-24 10:28:16.554046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.290 [2024-04-24 10:28:16.555661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.551 [2024-04-24 10:28:16.565102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.551 [2024-04-24 10:28:16.565480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.551 [2024-04-24 10:28:16.565699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.551 [2024-04-24 10:28:16.565729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.551 [2024-04-24 10:28:16.565750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.551 [2024-04-24 10:28:16.566092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.566523] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.566547] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.566567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.568551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.576967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.577319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.577611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.577640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.577662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.578039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.578186] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.578195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.578201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.579974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.588953] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.589324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.589640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.589670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.589699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.589807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.589900] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.589907] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.589913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.591545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.600801] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.601270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.601589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.601620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.601642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.602022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.602241] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.602258] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.602264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.604806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.613289] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.613567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.613745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.613775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.613797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.614203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.614300] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.614308] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.614313] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.616082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.625153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.625488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.625764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.625794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.625815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.626159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.626541] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.626564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.626583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.628559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.636948] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.637296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.637520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.637551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.637572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.637902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.638179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.638187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.638193] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.639801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.648833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.649187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.649430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.649439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.649446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.649539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.649632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.649640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.649646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.651245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
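Each cycle in these entries follows the same sequence: the disconnect notice, two refused connect() attempts, the qpair-level connection error, and then a flush failure reported as "(9): Bad file descriptor". Error 9 is EBADF: by the time the qpair tries to flush its pending work, the socket that failed to connect has already been torn down, so the descriptor no longer exists. A minimal illustration of that second-order error (hypothetical, not SPDK code):

    /* Hypothetical illustration: I/O on a descriptor that was already
     * closed fails with EBADF (9), as in the flush errors above. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                      /* socket torn down, as after the failed connect */
        char byte = 0;
        if (write(fd, &byte, 1) < 0)    /* a later flush of the descriptor now fails */
            printf("flush failed (%d): %s\n", errno, strerror(errno));
        return 0;
    }

On Linux this prints errno 9 (Bad file descriptor), matching the nvme_tcp_qpair_process_completions lines in the log.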
00:33:03.552 [2024-04-24 10:28:16.660703] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.661165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.661412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.661421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.661428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.552 [2024-04-24 10:28:16.661542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.552 [2024-04-24 10:28:16.661640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.552 [2024-04-24 10:28:16.661648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.552 [2024-04-24 10:28:16.661654] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.552 [2024-04-24 10:28:16.663520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.552 [2024-04-24 10:28:16.672544] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.552 [2024-04-24 10:28:16.672973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.673272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.552 [2024-04-24 10:28:16.673282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.552 [2024-04-24 10:28:16.673289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.673397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.673533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.673540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.673546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.675250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.684337] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.684718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.684987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.685016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.685038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.685273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.685387] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.685395] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.685401] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.687043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.696039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.696401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.696665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.696695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.696716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.697024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.697171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.697179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.697185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.698810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.707959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.708308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.708562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.708600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.708622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.709002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.709396] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.709423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.709442] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.711262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.719997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.720300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.720516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.720526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.720533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.720649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.720796] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.720804] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.720810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.722664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.731731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.732145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.732412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.732443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.732464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.732842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.733182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.733208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.733228] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.735635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.744584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.745004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.745394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.745426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.745464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.745563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.745647] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.745655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.745661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.747533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.756383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.756705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.756999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.757029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.757051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.757379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.757478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.757485] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.757491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.759198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.768296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.768621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.768964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.768994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.769015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.769276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.769375] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.769382] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.769388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.553 [2024-04-24 10:28:16.771031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.553 [2024-04-24 10:28:16.780199] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.553 [2024-04-24 10:28:16.780474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.780787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.553 [2024-04-24 10:28:16.780818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.553 [2024-04-24 10:28:16.780839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.553 [2024-04-24 10:28:16.781337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.553 [2024-04-24 10:28:16.781651] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.553 [2024-04-24 10:28:16.781659] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.553 [2024-04-24 10:28:16.781664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.554 [2024-04-24 10:28:16.783126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.554 [2024-04-24 10:28:16.791983] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.554 [2024-04-24 10:28:16.792387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.554 [2024-04-24 10:28:16.792656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.554 [2024-04-24 10:28:16.792687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.554 [2024-04-24 10:28:16.792708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.554 [2024-04-24 10:28:16.793097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.554 [2024-04-24 10:28:16.793508] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.554 [2024-04-24 10:28:16.793515] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.554 [2024-04-24 10:28:16.793521] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.554 [2024-04-24 10:28:16.795204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.554 [2024-04-24 10:28:16.803900] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.554 [2024-04-24 10:28:16.804316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.554 [2024-04-24 10:28:16.804541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.554 [2024-04-24 10:28:16.804572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.554 [2024-04-24 10:28:16.804593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.554 [2024-04-24 10:28:16.804922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.554 [2024-04-24 10:28:16.805361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.554 [2024-04-24 10:28:16.805387] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.554 [2024-04-24 10:28:16.805406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.554 [2024-04-24 10:28:16.807004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.554 [2024-04-24 10:28:16.815643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.554 [2024-04-24 10:28:16.816093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.554 [2024-04-24 10:28:16.816414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.554 [2024-04-24 10:28:16.816444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.554 [2024-04-24 10:28:16.816465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.554 [2024-04-24 10:28:16.816649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.554 [2024-04-24 10:28:16.816759] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.554 [2024-04-24 10:28:16.816766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.554 [2024-04-24 10:28:16.816772] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.554 [2024-04-24 10:28:16.818439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.554 [2024-04-24 10:28:16.827845] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.815 [2024-04-24 10:28:16.828269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.828441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.828450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.815 [2024-04-24 10:28:16.828457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.815 [2024-04-24 10:28:16.828603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.815 [2024-04-24 10:28:16.828690] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.815 [2024-04-24 10:28:16.828698] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.815 [2024-04-24 10:28:16.828704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.815 [2024-04-24 10:28:16.830620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.815 [2024-04-24 10:28:16.839549] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.815 [2024-04-24 10:28:16.839960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.840218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.840250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.815 [2024-04-24 10:28:16.840272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.815 [2024-04-24 10:28:16.840601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.815 [2024-04-24 10:28:16.840779] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.815 [2024-04-24 10:28:16.840786] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.815 [2024-04-24 10:28:16.840792] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.815 [2024-04-24 10:28:16.842446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.815 [2024-04-24 10:28:16.851338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.815 [2024-04-24 10:28:16.851751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.852014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.852043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.815 [2024-04-24 10:28:16.852065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.815 [2024-04-24 10:28:16.852456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.815 [2024-04-24 10:28:16.852780] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.815 [2024-04-24 10:28:16.852793] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.815 [2024-04-24 10:28:16.852799] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.815 [2024-04-24 10:28:16.854337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.815 [2024-04-24 10:28:16.863428] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.815 [2024-04-24 10:28:16.863847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.864101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.864131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.815 [2024-04-24 10:28:16.864153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.815 [2024-04-24 10:28:16.864449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.815 [2024-04-24 10:28:16.864517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.815 [2024-04-24 10:28:16.864525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.815 [2024-04-24 10:28:16.864531] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.815 [2024-04-24 10:28:16.866177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.815 [2024-04-24 10:28:16.875498] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.815 [2024-04-24 10:28:16.875901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.876258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.815 [2024-04-24 10:28:16.876289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.815 [2024-04-24 10:28:16.876311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.815 [2024-04-24 10:28:16.876741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.815 [2024-04-24 10:28:16.876899] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.815 [2024-04-24 10:28:16.876907] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.815 [2024-04-24 10:28:16.876913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.815 [2024-04-24 10:28:16.878782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.815 [2024-04-24 10:28:16.887294] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:03.816 [2024-04-24 10:28:16.887683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.816 [2024-04-24 10:28:16.887957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:03.816 [2024-04-24 10:28:16.887987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:03.816 [2024-04-24 10:28:16.888008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:03.816 [2024-04-24 10:28:16.888351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:03.816 [2024-04-24 10:28:16.888507] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:03.816 [2024-04-24 10:28:16.888515] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:03.816 [2024-04-24 10:28:16.888524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:03.816 [2024-04-24 10:28:16.890212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:03.816 [2024-04-24 10:28:16.899170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.899590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.899985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.900015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.900036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.900380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.900585] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.900593] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.900599] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.902135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.816 [2024-04-24 10:28:16.911022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.911408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.911689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.911719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.911740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.912133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.912516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.912540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.912560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.914367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.816 [2024-04-24 10:28:16.922968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.923328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.923553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.923582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.923604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.923833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.924278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.924304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.924324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.926358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.816 [2024-04-24 10:28:16.934832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.935211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.935503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.935533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.935554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.935882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.936016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.936023] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.936029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.937775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.816 [2024-04-24 10:28:16.946641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.947056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.947405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.947435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.947456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.947639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.947704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.947711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.947717] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.949412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.816 [2024-04-24 10:28:16.958534] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.958925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.959247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.959280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.959301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.959680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.959902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.959910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.959915] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.961617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.816 [2024-04-24 10:28:16.970425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.970796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.971095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.971126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.971147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.971526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.971955] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.971979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.971998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.973827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.816 [2024-04-24 10:28:16.982090] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.982447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.982745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.982774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.982795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.983147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.816 [2024-04-24 10:28:16.983255] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.816 [2024-04-24 10:28:16.983262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.816 [2024-04-24 10:28:16.983267] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.816 [2024-04-24 10:28:16.984946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.816 [2024-04-24 10:28:16.993797] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.816 [2024-04-24 10:28:16.994208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.994579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.816 [2024-04-24 10:28:16.994614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.816 [2024-04-24 10:28:16.994621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.816 [2024-04-24 10:28:16.994742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:16.994850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:16.994857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:16.994863] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:16.996532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.817 [2024-04-24 10:28:17.005615] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.006031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.006382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.006414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.006435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.006715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.007109] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.007142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.007148] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.008801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.817 [2024-04-24 10:28:17.017402] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.017843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.018314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.018350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.018371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.018703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.018854] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.018861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.018868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.020582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.817 [2024-04-24 10:28:17.029232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.029650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.029901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.029931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.029953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.030248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.030680] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.030704] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.030723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.032569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.817 [2024-04-24 10:28:17.040998] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.041419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.041696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.041727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.041756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.042114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.042228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.042236] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.042242] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.043904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.817 [2024-04-24 10:28:17.052897] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.053308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.053504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.053535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.053556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.053937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.054141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.054149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.054155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.055872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.817 [2024-04-24 10:28:17.064832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.065258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.065609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.065639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.065660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.065881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.065988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.065996] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.066001] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.067763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:03.817 [2024-04-24 10:28:17.076687] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.077126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.077350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.077381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.077402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.077690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.077939] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.077947] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.077953] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.079627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:03.817 [2024-04-24 10:28:17.088792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.817 [2024-04-24 10:28:17.089160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.089453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.817 [2024-04-24 10:28:17.089463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:03.817 [2024-04-24 10:28:17.089470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:03.817 [2024-04-24 10:28:17.089587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:03.817 [2024-04-24 10:28:17.089703] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:03.817 [2024-04-24 10:28:17.089711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:03.817 [2024-04-24 10:28:17.089718] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.817 [2024-04-24 10:28:17.091504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.079 [2024-04-24 10:28:17.100743] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.101154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.101372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.101381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.101387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.101493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.101600] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.101607] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.101613] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.103249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.079 [2024-04-24 10:28:17.112664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.113093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.113412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.113442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.113463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.113693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.114245] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.114273] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.114279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.115763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.079 [2024-04-24 10:28:17.124532] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.124839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.125139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.125172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.125193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.125423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.125803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.125827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.125847] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.127681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.079 [2024-04-24 10:28:17.136357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.136794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.137089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.137122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.137143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.137371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.137478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.137485] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.137491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.139144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.079 [2024-04-24 10:28:17.148270] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.148671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.148937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.148968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.148990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.149304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.149404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.149414] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.149420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.151145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.079 [2024-04-24 10:28:17.160227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.160597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.160929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.160958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.160979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.161228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.161342] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.161350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.161356] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.163129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.079 [2024-04-24 10:28:17.171998] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.172322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.172603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.172633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.172655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.172984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.173207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.173215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.079 [2024-04-24 10:28:17.173221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.079 [2024-04-24 10:28:17.174948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.079 [2024-04-24 10:28:17.183832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.079 [2024-04-24 10:28:17.184189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.185305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.079 [2024-04-24 10:28:17.185325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.079 [2024-04-24 10:28:17.185333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.079 [2024-04-24 10:28:17.185434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.079 [2024-04-24 10:28:17.185570] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.079 [2024-04-24 10:28:17.185577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.185586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.187294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.080 [2024-04-24 10:28:17.195745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.195934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.196205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.196216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.196223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.196318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.196425] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.196433] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.196438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.198092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.080 [2024-04-24 10:28:17.207524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.207813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.208034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.208043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.208050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.208182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.208326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.208333] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.208339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.209958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.080 [2024-04-24 10:28:17.219500] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.219801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.220057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.220102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.220124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.220455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.220640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.220648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.220654] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.222473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.080 [2024-04-24 10:28:17.231414] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.231770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.231992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.232022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.232043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.232389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.232607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.232615] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.232621] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.234474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.080 [2024-04-24 10:28:17.243258] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.243704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.243980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.244011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.244033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.244527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.244645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.244653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.244659] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.246356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.080 [2024-04-24 10:28:17.255126] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.255570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.255883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.255913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.255935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.256280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.256647] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.256655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.256661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.258317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.080 [2024-04-24 10:28:17.266995] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.267482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.267782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.267811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.267832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.268327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.268435] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.268442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.268448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.270099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.080 [2024-04-24 10:28:17.278842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.279293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.279519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.279528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.279535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.279628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.279693] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.279700] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.279706] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.281388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.080 [2024-04-24 10:28:17.290715] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.291124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.291461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.080 [2024-04-24 10:28:17.291491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.080 [2024-04-24 10:28:17.291512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.080 [2024-04-24 10:28:17.291876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.080 [2024-04-24 10:28:17.291997] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.080 [2024-04-24 10:28:17.292004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.080 [2024-04-24 10:28:17.292009] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.080 [2024-04-24 10:28:17.293754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.080 [2024-04-24 10:28:17.302384] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.080 [2024-04-24 10:28:17.302707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.302960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.302990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.081 [2024-04-24 10:28:17.303011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.081 [2024-04-24 10:28:17.303443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.081 [2024-04-24 10:28:17.303513] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.081 [2024-04-24 10:28:17.303520] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.081 [2024-04-24 10:28:17.303526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.081 [2024-04-24 10:28:17.305259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.081 [2024-04-24 10:28:17.314245] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.081 [2024-04-24 10:28:17.314572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.314872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.314903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.081 [2024-04-24 10:28:17.314923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.081 [2024-04-24 10:28:17.315418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.081 [2024-04-24 10:28:17.315547] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.081 [2024-04-24 10:28:17.315554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.081 [2024-04-24 10:28:17.315560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.081 [2024-04-24 10:28:17.317237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.081 [2024-04-24 10:28:17.326167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.081 [2024-04-24 10:28:17.326592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.326848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.326877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.081 [2024-04-24 10:28:17.326898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.081 [2024-04-24 10:28:17.327241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.081 [2024-04-24 10:28:17.327673] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.081 [2024-04-24 10:28:17.327695] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.081 [2024-04-24 10:28:17.327728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.081 [2024-04-24 10:28:17.329473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.081 [2024-04-24 10:28:17.337925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.081 [2024-04-24 10:28:17.338392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.338684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.338696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.081 [2024-04-24 10:28:17.338703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.081 [2024-04-24 10:28:17.338846] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.081 [2024-04-24 10:28:17.338945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.081 [2024-04-24 10:28:17.338952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.081 [2024-04-24 10:28:17.338958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.081 [2024-04-24 10:28:17.340565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.081 [2024-04-24 10:28:17.349993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.081 [2024-04-24 10:28:17.350427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.350661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.081 [2024-04-24 10:28:17.350670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.081 [2024-04-24 10:28:17.350677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.081 [2024-04-24 10:28:17.350763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.081 [2024-04-24 10:28:17.350850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.081 [2024-04-24 10:28:17.350858] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.081 [2024-04-24 10:28:17.350864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.081 [2024-04-24 10:28:17.352621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.343 [2024-04-24 10:28:17.361944] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.343 [2024-04-24 10:28:17.362401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.343 [2024-04-24 10:28:17.362696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.343 [2024-04-24 10:28:17.362726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.343 [2024-04-24 10:28:17.362748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.343 [2024-04-24 10:28:17.363028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.343 [2024-04-24 10:28:17.363186] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.343 [2024-04-24 10:28:17.363195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.343 [2024-04-24 10:28:17.363201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.343 [2024-04-24 10:28:17.364984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.343 [2024-04-24 10:28:17.373772] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.343 [2024-04-24 10:28:17.374217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.374541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.374572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.343 [2024-04-24 10:28:17.374600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.343 [2024-04-24 10:28:17.374854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.343 [2024-04-24 10:28:17.374947] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.343 [2024-04-24 10:28:17.374954] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.343 [2024-04-24 10:28:17.374960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.343 [2024-04-24 10:28:17.376656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.343 [2024-04-24 10:28:17.385608] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.343 [2024-04-24 10:28:17.386026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.386341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.386351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.343 [2024-04-24 10:28:17.386358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.343 [2024-04-24 10:28:17.386456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.343 [2024-04-24 10:28:17.386613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.343 [2024-04-24 10:28:17.386621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.343 [2024-04-24 10:28:17.386627] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.343 [2024-04-24 10:28:17.388355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.343 [2024-04-24 10:28:17.397318] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.343 [2024-04-24 10:28:17.397767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.398031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.398061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.343 [2024-04-24 10:28:17.398097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.343 [2024-04-24 10:28:17.398527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.343 [2024-04-24 10:28:17.398757] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.343 [2024-04-24 10:28:17.398781] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.343 [2024-04-24 10:28:17.398800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.343 [2024-04-24 10:28:17.400623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.343 [2024-04-24 10:28:17.409005] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.343 [2024-04-24 10:28:17.409454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.409711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.409720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.343 [2024-04-24 10:28:17.409727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.343 [2024-04-24 10:28:17.409828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.343 [2024-04-24 10:28:17.409942] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.343 [2024-04-24 10:28:17.409949] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.343 [2024-04-24 10:28:17.409955] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.343 [2024-04-24 10:28:17.411679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.343 [2024-04-24 10:28:17.420786] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.343 [2024-04-24 10:28:17.421192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.421494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.343 [2024-04-24 10:28:17.421526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.421548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.421878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.422048] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.422055] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.422061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.423766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 [2024-04-24 10:28:17.432568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.432982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.433338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.433369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.433390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.433819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.434199] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.434207] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.434213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 494101 Killed "${NVMF_APP[@]}" "$@"
00:33:04.344 10:28:17 -- host/bdevperf.sh@36 -- # tgt_init
00:33:04.344 10:28:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:04.344 10:28:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:33:04.344 10:28:17 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:04.344 10:28:17 -- common/autotest_common.sh@10 -- # set +x
00:33:04.344 [2024-04-24 10:28:17.436076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 10:28:17 -- nvmf/common.sh@469 -- # nvmfpid=495540
00:33:04.344 10:28:17 -- nvmf/common.sh@470 -- # waitforlisten 495540
00:33:04.344 10:28:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:04.344 10:28:17 -- common/autotest_common.sh@819 -- # '[' -z 495540 ']'
00:33:04.344 10:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:04.344 10:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:04.344 10:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:04.344 10:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:04.344 10:28:17 -- common/autotest_common.sh@10 -- # set +x
00:33:04.344 [2024-04-24 10:28:17.444711] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.445127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.445441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.445451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.445458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.445575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.445677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.445685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.445691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.447617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 [2024-04-24 10:28:17.456748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.457163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.457457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.457467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.457475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.457576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.457677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.457685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.457691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.459615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 [2024-04-24 10:28:17.468784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.469218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.469513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.469523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.469530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.469646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.469762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.469770] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.469779] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.471597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 [2024-04-24 10:28:17.480562] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.480957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.481228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.481239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.481246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.481359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.481473] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.481481] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.481487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.483323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 [2024-04-24 10:28:17.487640] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:33:04.344 [2024-04-24 10:28:17.487678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:04.344 [2024-04-24 10:28:17.492452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.492802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.493102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.493114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.493122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.493255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.493357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.493365] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.493371] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.495139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 [2024-04-24 10:28:17.504423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.504815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.505113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.505123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.344 [2024-04-24 10:28:17.505131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.344 [2024-04-24 10:28:17.505230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.344 [2024-04-24 10:28:17.505361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.344 [2024-04-24 10:28:17.505369] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.344 [2024-04-24 10:28:17.505375] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.344 [2024-04-24 10:28:17.506989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.344 EAL: No free 2048 kB hugepages reported on node 1
00:33:04.344 [2024-04-24 10:28:17.516636] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.344 [2024-04-24 10:28:17.517019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.517269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.344 [2024-04-24 10:28:17.517290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.517297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.517474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.517592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.517599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.517605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.519395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.528617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.529035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.529333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.529343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.529350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.529453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.529600] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.529608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.529614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.531427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.540543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.540932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.541153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.541163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.541171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.541257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.541344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.541354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.541360] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.543191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.545786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:04.345 [2024-04-24 10:28:17.552641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.553127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.553424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.553435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.553442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.553545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.553677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.553685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.553692] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.555491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.564545] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.564980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.565196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.565206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.565214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.565316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.565448] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.565457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.565463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.567306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.576489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.576953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.577250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.577261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.577268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.577398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.577526] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.577537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.577543] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.579365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.588429] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.588878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.589176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.589188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.589196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.589284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.589432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.589440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.589447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.591087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.600224] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.600668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.600887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.600897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.600905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.601022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.601206] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.601215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.601221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.603031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.345 [2024-04-24 10:28:17.612139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.345 [2024-04-24 10:28:17.612547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.612795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.345 [2024-04-24 10:28:17.612805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.345 [2024-04-24 10:28:17.612812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.345 [2024-04-24 10:28:17.612916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.345 [2024-04-24 10:28:17.613018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.345 [2024-04-24 10:28:17.613025] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.345 [2024-04-24 10:28:17.613036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.345 [2024-04-24 10:28:17.614851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.624058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.624450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.624744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.624754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.624761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.624863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.625010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.625019] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.625024] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.625344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:33:04.607 [2024-04-24 10:28:17.625440] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:04.607 [2024-04-24 10:28:17.625448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:04.607 [2024-04-24 10:28:17.625454] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:04.607 [2024-04-24 10:28:17.625488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:04.607 [2024-04-24 10:28:17.625574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:33:04.607 [2024-04-24 10:28:17.625575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:04.607 [2024-04-24 10:28:17.626903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.636056] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.636541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.636779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.636789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.636797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.636916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.637033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.637042] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.637049] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.639174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.648081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.648498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.648742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.648752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.648767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.648840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.648971] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.648980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.648986] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.650924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.659986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.660343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.660513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.660523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.660531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.660619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.660722] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.660729] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.660736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.662573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.671983] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.672459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.672649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.672659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.672667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.672801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.672902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.672910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.672917] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.674617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.684025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.684437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.684732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.684742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.684750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.684908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.685012] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.685020] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.685026] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.686816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.695965] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.696404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.696672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.696683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.696690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.696807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.696909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.696918] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.696925] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.607 [2024-04-24 10:28:17.698696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.607 [2024-04-24 10:28:17.707955] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.607 [2024-04-24 10:28:17.708456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.708746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.607 [2024-04-24 10:28:17.708756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.607 [2024-04-24 10:28:17.708763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.607 [2024-04-24 10:28:17.708880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.607 [2024-04-24 10:28:17.708980] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.607 [2024-04-24 10:28:17.708989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.607 [2024-04-24 10:28:17.708995] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.710887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.720036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.720401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.720667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.720678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.720685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.720816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.720921] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.720929] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.720935] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.722888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.732046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.732400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.732671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.732681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.732688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.732790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.732921] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.732930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.732936] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.734843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.744067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.744383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.744674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.744684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.744691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.744808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.744939] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.744948] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.744954] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.746679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.756122] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.756558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.756774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.756784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.756791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.756877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.756993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.757004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.757011] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.758814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.768093] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.768484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.768774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.768784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.768790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.768937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.769054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.769063] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.769069] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.770856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.780139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.780513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.780730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.780740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.780747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.780879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.780995] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.781004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.781010] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.782933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.792064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.792508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.792712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.792722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.792729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.792800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.792932] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.792939] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.792948] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.794707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.804052] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.804507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.804801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.804811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.804818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.804935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.805086] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.805095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.805101] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.807081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.815967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:04.608 [2024-04-24 10:28:17.816275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.816502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:04.608 [2024-04-24 10:28:17.816512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420
00:33:04.608 [2024-04-24 10:28:17.816520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set
00:33:04.608 [2024-04-24 10:28:17.816636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor
00:33:04.608 [2024-04-24 10:28:17.816753] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:04.608 [2024-04-24 10:28:17.816762] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:04.608 [2024-04-24 10:28:17.816767] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:04.608 [2024-04-24 10:28:17.818570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:04.608 [2024-04-24 10:28:17.827974] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.609 [2024-04-24 10:28:17.828361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.828675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.828685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.609 [2024-04-24 10:28:17.828692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.609 [2024-04-24 10:28:17.828808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.609 [2024-04-24 10:28:17.828939] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.609 [2024-04-24 10:28:17.828947] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.609 [2024-04-24 10:28:17.828953] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.609 [2024-04-24 10:28:17.830754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.609 [2024-04-24 10:28:17.840332] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.609 [2024-04-24 10:28:17.840720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.841014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.841024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.609 [2024-04-24 10:28:17.841031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.609 [2024-04-24 10:28:17.841139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.609 [2024-04-24 10:28:17.841241] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.609 [2024-04-24 10:28:17.841249] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.609 [2024-04-24 10:28:17.841255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.609 [2024-04-24 10:28:17.843160] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.609 [2024-04-24 10:28:17.852399] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.609 [2024-04-24 10:28:17.852819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.853119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.853130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.609 [2024-04-24 10:28:17.853137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.609 [2024-04-24 10:28:17.853256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.609 [2024-04-24 10:28:17.853358] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.609 [2024-04-24 10:28:17.853366] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.609 [2024-04-24 10:28:17.853373] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.609 [2024-04-24 10:28:17.855340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.609 [2024-04-24 10:28:17.864219] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.609 [2024-04-24 10:28:17.864566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.864857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.864867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.609 [2024-04-24 10:28:17.864874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.609 [2024-04-24 10:28:17.864993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.609 [2024-04-24 10:28:17.865085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.609 [2024-04-24 10:28:17.865094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.609 [2024-04-24 10:28:17.865100] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.609 [2024-04-24 10:28:17.866793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.609 [2024-04-24 10:28:17.876132] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.609 [2024-04-24 10:28:17.876602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.876824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.609 [2024-04-24 10:28:17.876835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.609 [2024-04-24 10:28:17.876842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.609 [2024-04-24 10:28:17.876958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.609 [2024-04-24 10:28:17.877095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.609 [2024-04-24 10:28:17.877104] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.609 [2024-04-24 10:28:17.877110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.609 [2024-04-24 10:28:17.878943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.870 [2024-04-24 10:28:17.888220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.870 [2024-04-24 10:28:17.888645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.870 [2024-04-24 10:28:17.888860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.870 [2024-04-24 10:28:17.888871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.870 [2024-04-24 10:28:17.888877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.870 [2024-04-24 10:28:17.888994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.870 [2024-04-24 10:28:17.889116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.870 [2024-04-24 10:28:17.889125] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.870 [2024-04-24 10:28:17.889131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.870 [2024-04-24 10:28:17.890898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.870 [2024-04-24 10:28:17.900195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.870 [2024-04-24 10:28:17.900581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.870 [2024-04-24 10:28:17.900805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.870 [2024-04-24 10:28:17.900815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.870 [2024-04-24 10:28:17.900822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.870 [2024-04-24 10:28:17.900908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.870 [2024-04-24 10:28:17.900978] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.870 [2024-04-24 10:28:17.900986] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.870 [2024-04-24 10:28:17.900992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.870 [2024-04-24 10:28:17.902690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.870 [2024-04-24 10:28:17.912248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.870 [2024-04-24 10:28:17.912605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.870 [2024-04-24 10:28:17.912899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.870 [2024-04-24 10:28:17.912910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.870 [2024-04-24 10:28:17.912917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.870 [2024-04-24 10:28:17.913064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.913218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.913227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.913233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.915169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.871 [2024-04-24 10:28:17.924151] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.924541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.924833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.924843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.924850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.924967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.925068] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.925082] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.925088] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.926901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.871 [2024-04-24 10:28:17.936188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.936677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.936902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.936912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.936919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.937021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.937173] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.937182] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.937188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.938925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.871 [2024-04-24 10:28:17.948229] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.948557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.948778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.948788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.948799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.948930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.949061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.949074] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.949080] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.950802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.871 [2024-04-24 10:28:17.960344] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.960719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.960943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.960954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.960960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.961046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.961153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.961161] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.961167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.962905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.871 [2024-04-24 10:28:17.972552] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.972966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.973188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.973199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.973206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.973308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.973410] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.973417] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.973424] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.975271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.871 [2024-04-24 10:28:17.984664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.985042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.985269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.985280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.985287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.985391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.985492] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.985501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.985507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.987233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.871 [2024-04-24 10:28:17.996693] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:17.997008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.997242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:17.997253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:17.997260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:17.997407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:17.997523] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:17.997531] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:17.997538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:17.999432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.871 [2024-04-24 10:28:18.008701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:18.009062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:18.009327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:18.009339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:18.009346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:18.009478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:18.009594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:18.009602] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:18.009608] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:18.011264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.871 [2024-04-24 10:28:18.020575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.871 [2024-04-24 10:28:18.020999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:18.021173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.871 [2024-04-24 10:28:18.021184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.871 [2024-04-24 10:28:18.021190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.871 [2024-04-24 10:28:18.021277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.871 [2024-04-24 10:28:18.021397] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.871 [2024-04-24 10:28:18.021405] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.871 [2024-04-24 10:28:18.021411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.871 [2024-04-24 10:28:18.023230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.872 [2024-04-24 10:28:18.032640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.033019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.033249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.033261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.033268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.033370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.033488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.033496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.033502] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.035322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.872 [2024-04-24 10:28:18.044946] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.045294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.045513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.045524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.045531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.045663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.045765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.045773] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.045779] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.047688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.872 [2024-04-24 10:28:18.056932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.057286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.057454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.057464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.057471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.057618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.057764] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.057776] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.057782] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.059645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.872 [2024-04-24 10:28:18.068895] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.069175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.069392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.069403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.069409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.069511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.069628] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.069636] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.069642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.071535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.872 [2024-04-24 10:28:18.080917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.081342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.081518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.081528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.081535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.081622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.081724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.081731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.081737] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.083449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.872 [2024-04-24 10:28:18.092886] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.093361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.093540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.093551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.093558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.093705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.093835] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.093844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.093853] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.095714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.872 [2024-04-24 10:28:18.104873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.105296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.105519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.105529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.105536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.105638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.105770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.105779] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.105785] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.107602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.872 [2024-04-24 10:28:18.116949] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.117317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.117535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.117546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.117553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.117654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.117725] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.117733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.117739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.119486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:04.872 [2024-04-24 10:28:18.129001] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.129300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.129475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.129486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.129492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.872 [2024-04-24 10:28:18.129624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.872 [2024-04-24 10:28:18.129695] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.872 [2024-04-24 10:28:18.129703] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.872 [2024-04-24 10:28:18.129709] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.872 [2024-04-24 10:28:18.131576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:04.872 [2024-04-24 10:28:18.140999] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:04.872 [2024-04-24 10:28:18.141269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.141491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.872 [2024-04-24 10:28:18.141501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:04.872 [2024-04-24 10:28:18.141508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:04.873 [2024-04-24 10:28:18.141640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:04.873 [2024-04-24 10:28:18.141756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:04.873 [2024-04-24 10:28:18.141764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:04.873 [2024-04-24 10:28:18.141770] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:04.873 [2024-04-24 10:28:18.143592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.133 [2024-04-24 10:28:18.153097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.133 [2024-04-24 10:28:18.153487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.133 [2024-04-24 10:28:18.153693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.133 [2024-04-24 10:28:18.153703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.153710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.153781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.153912] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.153920] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.153926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.155743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.134 [2024-04-24 10:28:18.165293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.165666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.165878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.165888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.165895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.166027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.166164] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.166172] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.166178] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.168037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.134 [2024-04-24 10:28:18.177325] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.177616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.177794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.177804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.177811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.177973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.178119] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.178128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.178134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.180085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.134 [2024-04-24 10:28:18.189291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.189708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.189937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.189947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.189954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.190091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.190223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.190231] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.190237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.192020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.134 [2024-04-24 10:28:18.201250] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.201537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.201757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.201768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.201775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.201891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.202009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.202017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.202023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.203814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.134 [2024-04-24 10:28:18.213364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.213763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.213937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.213949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.213956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.214079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.214181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.214189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.214195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.215949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.134 [2024-04-24 10:28:18.225338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.225649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.225874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.225885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.225892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.226040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.226147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.226155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.226162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.227978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.134 [2024-04-24 10:28:18.237255] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.237637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.237809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.237820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.237827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.237930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.238062] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.238076] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.238082] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.239850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.134 [2024-04-24 10:28:18.249383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.249729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.249944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.249960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.134 [2024-04-24 10:28:18.249967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.134 [2024-04-24 10:28:18.250089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.134 [2024-04-24 10:28:18.250222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.134 [2024-04-24 10:28:18.250230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.134 [2024-04-24 10:28:18.250236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.134 [2024-04-24 10:28:18.252111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.134 [2024-04-24 10:28:18.261331] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.134 [2024-04-24 10:28:18.261672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.134 [2024-04-24 10:28:18.261900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.261911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.261918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.262065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.262201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.262212] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.262219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.263988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.135 [2024-04-24 10:28:18.273461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.273841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.274052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.274061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.274068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.274191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.274321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.274330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.274336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.276119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.135 [2024-04-24 10:28:18.285367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.285603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.285755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.285765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.285775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.285922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.286024] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.286032] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.286039] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.287839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.135 [2024-04-24 10:28:18.297284] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.297702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.297926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.297937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.297943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.298045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.298166] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.298175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.298181] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.300006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.135 10:28:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:05.135 10:28:18 -- common/autotest_common.sh@852 -- # return 0 00:33:05.135 10:28:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:05.135 10:28:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:05.135 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:05.135 [2024-04-24 10:28:18.309482] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.309944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.310191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.310203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.310210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.310313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.310445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.310454] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.310460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.312366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.135 [2024-04-24 10:28:18.321510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.321956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.322128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.322142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.322149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.322281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.322428] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.322435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.322442] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.324158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.135 [2024-04-24 10:28:18.333454] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.333726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.333992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.334003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.334011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.334117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.334251] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.334259] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.334265] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.135 [2024-04-24 10:28:18.335911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.135 10:28:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.135 10:28:18 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.135 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.135 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:05.135 [2024-04-24 10:28:18.341684] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.135 [2024-04-24 10:28:18.345378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.345780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.346003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.346014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.346020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.346157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.346274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.135 [2024-04-24 10:28:18.346282] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.135 [2024-04-24 10:28:18.346288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
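rpc_cmd in the harness forwards its arguments to SPDK's scripts/rpc.py, so the call above that produces the "*** TCP Transport Init ***" notice is one plain RPC against the running nvmf_tgt. A hand-run equivalent, as a sketch that assumes rpc.py's default /var/tmp/spdk.sock socket (the socket path never appears in this log):

    # Equivalent of the rpc_cmd invocation above (sketch, default RPC socket assumed).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192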
00:33:05.135 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.135 10:28:18 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:05.135 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.135 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:05.135 [2024-04-24 10:28:18.347963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.135 [2024-04-24 10:28:18.357427] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.135 [2024-04-24 10:28:18.357862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.358159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.135 [2024-04-24 10:28:18.358172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.135 [2024-04-24 10:28:18.358179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.135 [2024-04-24 10:28:18.358297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.135 [2024-04-24 10:28:18.358398] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.136 [2024-04-24 10:28:18.358407] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.136 [2024-04-24 10:28:18.358413] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.136 [2024-04-24 10:28:18.360110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.136 [2024-04-24 10:28:18.369325] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.136 [2024-04-24 10:28:18.369747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.136 [2024-04-24 10:28:18.370035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.136 [2024-04-24 10:28:18.370046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.136 [2024-04-24 10:28:18.370053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.136 [2024-04-24 10:28:18.370190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.136 [2024-04-24 10:28:18.370282] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.136 [2024-04-24 10:28:18.370291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.136 [2024-04-24 10:28:18.370297] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.136 [2024-04-24 10:28:18.371962] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.136 Malloc0 00:33:05.136 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.136 10:28:18 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:05.136 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.136 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:05.136 [2024-04-24 10:28:18.381374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.136 [2024-04-24 10:28:18.381777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.136 [2024-04-24 10:28:18.382055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.136 [2024-04-24 10:28:18.382066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.136 [2024-04-24 10:28:18.382077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.136 [2024-04-24 10:28:18.382210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.136 [2024-04-24 10:28:18.382296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.136 [2024-04-24 10:28:18.382304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.136 [2024-04-24 10:28:18.382315] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.136 [2024-04-24 10:28:18.384132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.136 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.136 10:28:18 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.136 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.136 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:33:05.136 [2024-04-24 10:28:18.393361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.136 [2024-04-24 10:28:18.393790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.136 [2024-04-24 10:28:18.394083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.136 [2024-04-24 10:28:18.394094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80a900 with addr=10.0.0.2, port=4420 00:33:05.136 [2024-04-24 10:28:18.394102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80a900 is same with the state(5) to be set 00:33:05.136 [2024-04-24 10:28:18.394203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a900 (9): Bad file descriptor 00:33:05.136 [2024-04-24 10:28:18.394290] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.136 [2024-04-24 10:28:18.394297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.136 [2024-04-24 10:28:18.394303] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
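Interleaved with the reset noise, host/bdevperf.sh@18 through @21 finish wiring up the target: a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, the namespace attach, and, in the next stretch of output, the 10.0.0.2:4420 TCP listener. Stripped of the xtrace framing, that is four RPCs; a condensed sketch using rpc.py directly:

    # The bring-up sequence the rpc_cmd calls above and just below perform (sketch).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420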
00:33:05.136 [2024-04-24 10:28:18.396119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:05.136 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:05.136 10:28:18 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:05.136 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:05.136 10:28:18 -- common/autotest_common.sh@10 -- # set +x
00:33:05.136 [2024-04-24 10:28:18.403401] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:05.136 [2024-04-24 10:28:18.405305] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:05.136 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:05.136 10:28:18 -- host/bdevperf.sh@38 -- # wait 494593
00:33:05.395 [2024-04-24 10:28:18.433346] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:15.375
00:33:15.375                                             Latency(us)
00:33:15.375 Device Information   : runtime(s)      IOPS     MiB/s    Fail/s     TO/s   Average       min       max
00:33:15.375 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:15.375 Verification LBA range: start 0x0 length 0x4000
00:33:15.375 Nvme1n1              :      15.00  12357.31     48.27  19131.82     0.00   4053.27    872.63  16184.54
00:33:15.375 ===================================================================================================================
00:33:15.375 Total                :             12357.31     48.27  19131.82     0.00   4053.27    872.63  16184.54
00:33:15.375 10:28:27 -- host/bdevperf.sh@39 -- # sync
00:33:15.375 10:28:27 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:15.375 10:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:15.375 10:28:27 -- common/autotest_common.sh@10 -- # set +x
00:33:15.375 10:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:15.375 10:28:27 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:33:15.375 10:28:27 -- host/bdevperf.sh@44 -- # nvmftestfini
00:33:15.375 10:28:27 -- nvmf/common.sh@476 -- # nvmfcleanup
00:33:15.375 10:28:27 -- nvmf/common.sh@116 -- # sync
00:33:15.375 10:28:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:33:15.375 10:28:27 -- nvmf/common.sh@119 -- # set +e
00:33:15.375 10:28:27 -- nvmf/common.sh@120 -- # for i in {1..20}
00:33:15.375 10:28:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:33:15.375 rmmod nvme_tcp
00:33:15.375 rmmod nvme_fabrics
00:33:15.375 rmmod nvme_keyring
00:33:15.375 10:28:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:33:15.375 10:28:27 -- nvmf/common.sh@123 -- # set -e
00:33:15.375 10:28:27 -- nvmf/common.sh@124 -- # return 0
00:33:15.375 10:28:27 -- nvmf/common.sh@477 -- # '[' -n 495540 ']'
00:33:15.375 10:28:27 -- nvmf/common.sh@478 -- # killprocess 495540
00:33:15.375 10:28:27 -- common/autotest_common.sh@926 -- # '[' -z 495540 ']'
00:33:15.375 10:28:27 -- common/autotest_common.sh@930 -- # kill -0 495540
00:33:15.375 10:28:27 -- common/autotest_common.sh@931 -- # uname
00:33:15.375 10:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:33:15.375 10:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 495540
00:33:15.375 10:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:33:15.375 10:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:33:15.375 10:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 495540'
00:33:15.375 killing process with pid 495540
00:33:15.375 10:28:27 -- common/autotest_common.sh@945 -- # kill 495540
00:33:15.375 10:28:27 -- common/autotest_common.sh@950 -- # wait 495540
00:33:15.375 10:28:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:33:15.375 10:28:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:33:15.375 10:28:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:33:15.375 10:28:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:15.375 10:28:27 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:33:15.375 10:28:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:15.375 10:28:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:15.375 10:28:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:16.313 10:28:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:33:16.313
00:33:16.313 real    0m25.889s
00:33:16.313 user    1m3.163s
00:33:16.313 sys     0m6.069s
00:33:16.313 10:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:16.313 10:28:29 -- common/autotest_common.sh@10 -- # set +x
00:33:16.313 ************************************
00:33:16.313 END TEST nvmf_bdevperf
00:33:16.313 ************************************
00:33:16.313 10:28:29 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:16.313 10:28:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:33:16.313 10:28:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:16.313 10:28:29 -- common/autotest_common.sh@10 -- # set +x
00:33:16.313 ************************************
00:33:16.313 START TEST nvmf_target_disconnect
00:33:16.313 ************************************
00:33:16.313 10:28:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:16.572 * Looking for test storage...
00:33:16.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:16.572 10:28:29 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.572 10:28:29 -- nvmf/common.sh@7 -- # uname -s 00:33:16.572 10:28:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.572 10:28:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.572 10:28:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.572 10:28:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.572 10:28:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.572 10:28:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.572 10:28:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.572 10:28:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.572 10:28:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.572 10:28:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.572 10:28:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:16.572 10:28:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:16.572 10:28:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.572 10:28:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.572 10:28:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.572 10:28:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.572 10:28:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.572 10:28:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.572 10:28:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.572 10:28:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.572 10:28:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.572 10:28:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.572 10:28:29 -- paths/export.sh@5 -- # export PATH 00:33:16.572 10:28:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.572 10:28:29 -- nvmf/common.sh@46 -- # : 0 00:33:16.572 10:28:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:16.572 10:28:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:16.572 10:28:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:16.572 10:28:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.572 10:28:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.572 10:28:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:16.572 10:28:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:16.572 10:28:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:16.572 10:28:29 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:16.572 10:28:29 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:16.572 10:28:29 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:16.572 10:28:29 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:33:16.572 10:28:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:16.572 10:28:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.572 10:28:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:16.572 10:28:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:16.572 10:28:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:16.572 10:28:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.572 10:28:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.572 10:28:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.572 10:28:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:16.572 10:28:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:16.572 10:28:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:16.572 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:33:21.846 10:28:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:21.846 10:28:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:21.846 10:28:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:21.846 10:28:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:21.846 10:28:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:21.846 10:28:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:21.846 10:28:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:21.846 
10:28:34 -- nvmf/common.sh@294 -- # net_devs=() 00:33:21.846 10:28:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:21.846 10:28:34 -- nvmf/common.sh@295 -- # e810=() 00:33:21.846 10:28:34 -- nvmf/common.sh@295 -- # local -ga e810 00:33:21.846 10:28:34 -- nvmf/common.sh@296 -- # x722=() 00:33:21.846 10:28:34 -- nvmf/common.sh@296 -- # local -ga x722 00:33:21.846 10:28:34 -- nvmf/common.sh@297 -- # mlx=() 00:33:21.846 10:28:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:21.846 10:28:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.846 10:28:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.846 10:28:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.846 10:28:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.847 10:28:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:21.847 10:28:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:21.847 10:28:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:21.847 10:28:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:21.847 10:28:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:21.847 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:21.847 10:28:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:21.847 10:28:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:21.847 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:21.847 10:28:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:21.847 10:28:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:21.847 10:28:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.847 10:28:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:21.847 10:28:34 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.847 10:28:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:21.847 Found net devices under 0000:86:00.0: cvl_0_0 00:33:21.847 10:28:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.847 10:28:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:21.847 10:28:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.847 10:28:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:21.847 10:28:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.847 10:28:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:21.847 Found net devices under 0000:86:00.1: cvl_0_1 00:33:21.847 10:28:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.847 10:28:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:21.847 10:28:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:21.847 10:28:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:21.847 10:28:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.847 10:28:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.847 10:28:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.847 10:28:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:21.847 10:28:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.847 10:28:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.847 10:28:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:21.847 10:28:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.847 10:28:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.847 10:28:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:21.847 10:28:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:21.847 10:28:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.847 10:28:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.847 10:28:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.847 10:28:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.847 10:28:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:21.847 10:28:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.847 10:28:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.847 10:28:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.847 10:28:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:21.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:33:21.847 00:33:21.847 --- 10.0.0.2 ping statistics --- 00:33:21.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.847 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:33:21.847 10:28:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:21.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:21.847 00:33:21.847 --- 10.0.0.1 ping statistics --- 00:33:21.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.847 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:21.847 10:28:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.847 10:28:34 -- nvmf/common.sh@410 -- # return 0 00:33:21.847 10:28:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:21.847 10:28:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.847 10:28:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:21.847 10:28:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.847 10:28:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:21.847 10:28:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:21.847 10:28:34 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:21.847 10:28:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:21.847 10:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:21.847 10:28:34 -- common/autotest_common.sh@10 -- # set +x 00:33:21.847 ************************************ 00:33:21.847 START TEST nvmf_target_disconnect_tc1 00:33:21.847 ************************************ 00:33:21.847 10:28:34 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:33:21.847 10:28:34 -- host/target_disconnect.sh@32 -- # set +e 00:33:21.847 10:28:34 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:21.847 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.847 [2024-04-24 10:28:35.036479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.847 [2024-04-24 10:28:35.036895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.847 [2024-04-24 10:28:35.036931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6610 with addr=10.0.0.2, port=4420 00:33:21.847 [2024-04-24 10:28:35.036981] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:21.847 [2024-04-24 10:28:35.037022] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:21.847 [2024-04-24 10:28:35.037028] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:21.847 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:21.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:21.847 Initializing NVMe Controllers 00:33:21.847 10:28:35 -- host/target_disconnect.sh@33 -- # trap - ERR 00:33:21.847 10:28:35 -- host/target_disconnect.sh@33 -- # print_backtrace 00:33:21.847 10:28:35 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:33:21.847 10:28:35 -- common/autotest_common.sh@1132 -- # return 0 00:33:21.847 10:28:35 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:33:21.847 10:28:35 -- host/target_disconnect.sh@41 -- # set -e 00:33:21.847 00:33:21.847 real 0m0.089s 00:33:21.847 user 0m0.040s 00:33:21.847 sys 0m0.047s 00:33:21.847 10:28:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:21.847 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:21.847 ************************************ 00:33:21.847 
END TEST nvmf_target_disconnect_tc1 00:33:21.847 ************************************ 00:33:21.847 10:28:35 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:21.847 10:28:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:21.847 10:28:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:21.847 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:21.847 ************************************ 00:33:21.847 START TEST nvmf_target_disconnect_tc2 00:33:21.847 ************************************ 00:33:21.847 10:28:35 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:33:21.847 10:28:35 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:33:21.847 10:28:35 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:21.847 10:28:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:21.847 10:28:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:21.847 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:21.847 10:28:35 -- nvmf/common.sh@469 -- # nvmfpid=500513 00:33:21.847 10:28:35 -- nvmf/common.sh@470 -- # waitforlisten 500513 00:33:21.847 10:28:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:21.847 10:28:35 -- common/autotest_common.sh@819 -- # '[' -z 500513 ']' 00:33:21.847 10:28:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.847 10:28:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:21.847 10:28:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.847 10:28:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:21.847 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:22.107 [2024-04-24 10:28:35.132785] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:22.107 [2024-04-24 10:28:35.132824] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.107 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.107 [2024-04-24 10:28:35.202459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:22.107 [2024-04-24 10:28:35.279217] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:22.107 [2024-04-24 10:28:35.279353] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.107 [2024-04-24 10:28:35.279361] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.107 [2024-04-24 10:28:35.279368] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
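For tc2, nvmfappstart restarts the target inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, i.e. reactors pinned to cores 4 through 7, which matches the four "Reactor started on core" notices just below. Reduced to its effect, the launch is the following sketch (command and paths verbatim from this log; the $! capture is illustrative, and the harness then blocks in waitforlisten until the app's RPC socket answers):

    # What nvmfappstart -m 0xF0 boils down to here (sketch).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!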
00:33:22.107 [2024-04-24 10:28:35.279479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:22.107 [2024-04-24 10:28:35.279586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:22.108 [2024-04-24 10:28:35.279613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:22.108 [2024-04-24 10:28:35.279614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:22.675 10:28:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:22.675 10:28:35 -- common/autotest_common.sh@852 -- # return 0 00:33:22.675 10:28:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:22.675 10:28:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:22.675 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 10:28:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.935 10:28:35 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:22.935 10:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.935 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 Malloc0 00:33:22.935 10:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.935 10:28:35 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:22.935 10:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.935 10:28:35 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 [2024-04-24 10:28:35.999497] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.935 10:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.935 10:28:36 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:22.935 10:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.935 10:28:36 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 10:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.935 10:28:36 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:22.935 10:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.935 10:28:36 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 10:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.935 10:28:36 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.935 10:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.935 10:28:36 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 [2024-04-24 10:28:36.027736] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.935 10:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.935 10:28:36 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:22.935 10:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.935 10:28:36 -- common/autotest_common.sh@10 -- # set +x 00:33:22.935 10:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.935 10:28:36 -- host/target_disconnect.sh@50 -- # reconnectpid=500761 00:33:22.935 10:28:36 -- host/target_disconnect.sh@52 -- # sleep 2 00:33:22.935 10:28:36 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:22.935 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.846 10:28:38 -- host/target_disconnect.sh@53 -- # kill -9 500513 00:33:24.846 10:28:38 -- host/target_disconnect.sh@55 -- # sleep 2 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 [2024-04-24 10:28:38.054726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed 
with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 [2024-04-24 10:28:38.054924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, 
sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Write completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 [2024-04-24 10:28:38.055125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.846 starting I/O failed 00:33:24.846 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 
00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Read completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 Write completed with error (sct=0, sc=8) 00:33:24.847 starting I/O failed 00:33:24.847 [2024-04-24 10:28:38.055315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:24.847 [2024-04-24 10:28:38.055510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.055808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.055839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.056122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.056412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.056453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.056732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.057060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.057102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.057326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.057592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.057623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 
00:33:24.847 [2024-04-24 10:28:38.057844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.058204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.058234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.058489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.058847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.058877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.059097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.059404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.059434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.059621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.059957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.059968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.060213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.060497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.060527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.060810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.061051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.061090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.061377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.061538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.061550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 
00:33:24.847 [2024-04-24 10:28:38.061817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.062054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.062092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.062412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.062679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.062710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.062951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.063233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.063270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.063540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.063726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.063738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.063953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.064206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.064238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.064495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.064832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.064862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.065114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.065371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:24.847 [2024-04-24 10:28:38.065402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:24.847 qpair failed and we were unable to recover it.
00:33:24.847 [2024-04-24 10:28:38.065595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.065928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.065958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.066259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.066451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.066480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.066729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.066985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.067015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.847 qpair failed and we were unable to recover it. 00:33:24.847 [2024-04-24 10:28:38.067240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.067437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.847 [2024-04-24 10:28:38.067449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.067728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.068021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.068050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.068359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.068665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.068677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.069013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.069252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.069284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 
00:33:24.848 [2024-04-24 10:28:38.069617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.069899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.069929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.070240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.070448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.070478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.070739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.070997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.071027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.071389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.071563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.071593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.071911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.072210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.072240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.072494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.072775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.072805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.073018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.073297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.073309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 
00:33:24.848 [2024-04-24 10:28:38.073556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.073834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.073864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.074196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.074458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.074488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.074753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.074993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.075023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.075285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.075591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.075620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.075950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.076236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.076268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.076463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.076718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.076748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.076995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.077254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.077285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 
00:33:24.848 [2024-04-24 10:28:38.077598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.077969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.077998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.078262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.078519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.078549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.078826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.079066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.079116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.079368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.079670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.079700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.079945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.080149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.080181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.080449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.080698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.080728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.080929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.081191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.081222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 
00:33:24.848 [2024-04-24 10:28:38.081503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.081828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.081858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.082044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.082384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.082415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.082719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.082867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.082878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.083142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.083344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.083355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.848 qpair failed and we were unable to recover it. 00:33:24.848 [2024-04-24 10:28:38.083664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.848 [2024-04-24 10:28:38.083962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.083992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.084218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.084509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.084539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.084756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.084991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.085021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 
00:33:24.849 [2024-04-24 10:28:38.085303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.085561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.085591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.085886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.086219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.086251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.086446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.086743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.086773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.087119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.087443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.087472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.087686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.087898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.087928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.088185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.088500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.088531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.088805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.089106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.089137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 
00:33:24.849 [2024-04-24 10:28:38.089341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.089590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.089601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.089770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.090007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.090037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.090266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.090526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.090555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.090907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.091169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.091212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.091466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.091749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.091779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.092042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.092308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.092339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.092560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.092903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.092933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 
00:33:24.849 [2024-04-24 10:28:38.093259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.093429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.093439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.093635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.093790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.093820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.094040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.094308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.094339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.094555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.094755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.094785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.094969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.095225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.095257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.095564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.095882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.095912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.096204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.096416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.096457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 
00:33:24.849 [2024-04-24 10:28:38.096788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.097041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.097079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.097353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.097594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.097608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.097835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.098112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.098126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.098340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.098498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.098509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.098734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.099032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.849 [2024-04-24 10:28:38.099063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.849 qpair failed and we were unable to recover it. 00:33:24.849 [2024-04-24 10:28:38.099269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.099523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.099554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.099818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.100064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.100102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 
00:33:24.850 [2024-04-24 10:28:38.100360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.100685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.100716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.101019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.101375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.101407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.101655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.101965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.102001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.102313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.102520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.102550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.102759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.103060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.103098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.103360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.103621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.103656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.103987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.104292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.104324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 
00:33:24.850 [2024-04-24 10:28:38.104524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.104827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.104856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.105080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.105406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.105436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.105649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.105936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.105966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.106219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.106436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.106466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.106662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.106911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.106942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.107204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.107411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.107442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.107741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.107944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.107955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 
00:33:24.850 [2024-04-24 10:28:38.108252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.108442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.108471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.108824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.109068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.109104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.109368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.109622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.109652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.109981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.110321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.110352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.110661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.110961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.110992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.111255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.111560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.111590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 00:33:24.850 [2024-04-24 10:28:38.111859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.112184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.850 [2024-04-24 10:28:38.112216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.850 qpair failed and we were unable to recover it. 
00:33:24.851 [2024-04-24 10:28:38.112501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.112784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.112815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.113082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.113421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.113451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.113713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.113953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.113983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.114299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.114589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.114619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.114920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.115243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.115285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.115557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.115838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.115851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.116141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.116362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.116374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 
00:33:24.851 [2024-04-24 10:28:38.116603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.116933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.116966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.117228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.117558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.117591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.117945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.118169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.118182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.118339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.118556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.118567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.118786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.119031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.119044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.119272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.119498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.119529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 00:33:24.851 [2024-04-24 10:28:38.119797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.120144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.851 [2024-04-24 10:28:38.120183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:24.851 qpair failed and we were unable to recover it. 
00:33:24.851 [2024-04-24 10:28:38.120525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.120853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.120865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.121155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.121451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.121463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.121616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.121852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.121864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.122150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.122453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.122487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.122745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.122964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.122976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.123199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.123341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.123351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.123571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.123879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.123891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 
00:33:25.120 [2024-04-24 10:28:38.124177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.124406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.124418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.124690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.124906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.124917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.125186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.125330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.125342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.125541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.125837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.125848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.126116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.126324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.126337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.126486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.126763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.126775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 00:33:25.120 [2024-04-24 10:28:38.126936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.127224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.120 [2024-04-24 10:28:38.127236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.120 qpair failed and we were unable to recover it. 
00:33:25.125 [2024-04-24 10:28:38.208522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.208784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.208814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.209148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.209337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.209367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.209567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.209926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.209956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.210258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.210450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.210481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.210857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.211216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.211248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.211601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.211914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.211944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.212265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.212529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.212560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 
00:33:25.125 [2024-04-24 10:28:38.212854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.213117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.125 [2024-04-24 10:28:38.213150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.125 qpair failed and we were unable to recover it. 00:33:25.125 [2024-04-24 10:28:38.213452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.213720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.213750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.214068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.214286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.214316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.214522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.214844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.214874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.215178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.215362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.215393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.215657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.215917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.215949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.216233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.216447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.216478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 
00:33:25.126 [2024-04-24 10:28:38.216749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.217001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.217013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.217260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.217447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.217478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.217824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.218004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.218015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.218225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.218451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.218463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.218689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.219016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.219046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.219407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.219669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.219700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.220015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.220290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.220322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 
00:33:25.126 [2024-04-24 10:28:38.220547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.220796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.220826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.221130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.221344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.221375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.221639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.221979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.222010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.222264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.222476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.222506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.222791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.223045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.223084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.223298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.223561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.223591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.223948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.224215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.224255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 
00:33:25.126 [2024-04-24 10:28:38.224505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.224718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.224730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.225015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.225374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.225406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.225613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.225859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.225889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.226159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.226469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.226500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.226846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.227087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.227100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.227417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.227624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.227654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.227995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.228260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.228292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 
00:33:25.126 [2024-04-24 10:28:38.228510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.228871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.228900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.126 [2024-04-24 10:28:38.229304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.229564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.126 [2024-04-24 10:28:38.229595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.126 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.229865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.230085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.230118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.230405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.230610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.230641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.230917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.231180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.231213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.231404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.231678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.231709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.232026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.232296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.232328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 
00:33:25.127 [2024-04-24 10:28:38.232602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.232944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.232974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.233344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.233543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.233573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.233866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.234172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.234196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.234375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.234663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.234694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.235059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.235410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.235442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.235788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.236040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.236083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.236353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.236629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.236660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 
00:33:25.127 [2024-04-24 10:28:38.237041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.237396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.237429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.237700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.238040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.238082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.238344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.238562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.238592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.238852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.239180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.239214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.239406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.239766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.239797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.240137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.240469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.240500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.240786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.241035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.241048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 
00:33:25.127 [2024-04-24 10:28:38.241311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.241534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.241565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.241901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.242263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.242295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.242618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.242968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.242998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.243353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.243709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.243739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.244088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.244357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.244387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.244653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.244936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.244966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.245180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.245365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.245396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 
00:33:25.127 [2024-04-24 10:28:38.245595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.245780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.245810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.246175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.246389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.246419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.127 [2024-04-24 10:28:38.246694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.247040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.127 [2024-04-24 10:28:38.247084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.127 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.247343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.247542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.247573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.247848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.248131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.248144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.248447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.248708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.248738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.249024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.249366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.249417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 
00:33:25.128 [2024-04-24 10:28:38.249739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.250087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.250120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.250398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.250712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.250743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.251098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.251420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.251451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.251790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.252176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.252209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.252475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.252671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.252702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.253048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.253241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.253254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.253580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.253773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.253803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 
00:33:25.128 [2024-04-24 10:28:38.254066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.254306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.254337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.254609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.254965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.254996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.255263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.255482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.255513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.255779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.256094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.256126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.256492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.256695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.256727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.257069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.257343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.257374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.257699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.257953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.257966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 
00:33:25.128 [2024-04-24 10:28:38.258241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.258467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.258479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.258664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.259027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.259059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.259407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.259675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.259705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.260046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.260319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.260351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.260631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.260908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.260951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.261192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.261471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.261484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.261714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.262021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.262052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 
00:33:25.128 [2024-04-24 10:28:38.262404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.262671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.262702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.263060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.263301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.263332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.263531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.263820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.128 [2024-04-24 10:28:38.263851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.128 qpair failed and we were unable to recover it. 00:33:25.128 [2024-04-24 10:28:38.264186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.264447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.264478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.264760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.265091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.265124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.265352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.265664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.265695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.265967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.266301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.266335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 
00:33:25.129 [2024-04-24 10:28:38.266544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.266851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.266881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.267238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.267445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.267476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.267704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.267893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.267924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.268187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.268407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.268419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.268646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.268799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.268813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.269091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.269371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.269401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.269620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.269969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.270000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 
00:33:25.129 [2024-04-24 10:28:38.270325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.270593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.270635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.270917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.271128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.271161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.271464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.271715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.271745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.272039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.272379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.272411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.272741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.273090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.273123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.273450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.273695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.273727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 00:33:25.129 [2024-04-24 10:28:38.274050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.274364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.129 [2024-04-24 10:28:38.274398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.129 qpair failed and we were unable to recover it. 
[... the same four-line group (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats 140 more times, wall clock 2024-04-24 10:28:38.274754 through 10:28:38.358517, elapsed 00:33:25.129 to 00:33:25.134 ...]
00:33:25.134 [2024-04-24 10:28:38.358887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.359153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.359185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.134 qpair failed and we were unable to recover it. 00:33:25.134 [2024-04-24 10:28:38.359405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.359663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.359694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.134 qpair failed and we were unable to recover it. 00:33:25.134 [2024-04-24 10:28:38.359969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.360275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.360308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.134 qpair failed and we were unable to recover it. 00:33:25.134 [2024-04-24 10:28:38.360634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.360922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.360954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.134 qpair failed and we were unable to recover it. 00:33:25.134 [2024-04-24 10:28:38.361219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.361542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.361581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.134 qpair failed and we were unable to recover it. 00:33:25.134 [2024-04-24 10:28:38.361896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.362190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.134 [2024-04-24 10:28:38.362237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.134 qpair failed and we were unable to recover it. 00:33:25.134 [2024-04-24 10:28:38.362501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.362793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.362824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 
00:33:25.135 [2024-04-24 10:28:38.363019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.363270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.363303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.364604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.364892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.364927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.365169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.365444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.365476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.365783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.366157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.366191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.366462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.366697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.366728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.367039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.367364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.367377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.367551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.367771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.367783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 
00:33:25.135 [2024-04-24 10:28:38.368018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.368248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.368265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.368519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.368727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.368758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.369093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.369359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.369390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.369644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.369985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.370016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.370390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.370629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.370660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.370956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.371275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.371289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.371531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.371720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.371733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 
00:33:25.135 [2024-04-24 10:28:38.372037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.372212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.372226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.372457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.372676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.372707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.373116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.373388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.373420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.373689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.373887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.373925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.374189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.374490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.374521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.374817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.375130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.375144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.375376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.375519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.375532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 
00:33:25.135 [2024-04-24 10:28:38.375741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.375982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.376013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.376369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.376686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.376717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.376936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.377272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.377307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.377515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.377675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.377688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.377998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.378220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.378252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.378576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.378922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.378953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 00:33:25.135 [2024-04-24 10:28:38.379122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.379280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.135 [2024-04-24 10:28:38.379295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.135 qpair failed and we were unable to recover it. 
00:33:25.135 [2024-04-24 10:28:38.379532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.379765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.379778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.380023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.380404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.380437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.380705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.380976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.380988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.381295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.381601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.381614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.381925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.382153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.382166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.382332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.382523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.382536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.382771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.382995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.383008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 
00:33:25.136 [2024-04-24 10:28:38.383159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.383414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.383427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.383593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.383773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.383790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.384096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.384271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.384285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.384453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.384689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.384701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.384911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.385216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.385232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.385457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.385736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.385750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.386081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.386400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.386416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 
00:33:25.136 [2024-04-24 10:28:38.386595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.386877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.386889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.387222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.387538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.387552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.387802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.388084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.388098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.388368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.388603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.136 [2024-04-24 10:28:38.388617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.136 qpair failed and we were unable to recover it. 00:33:25.136 [2024-04-24 10:28:38.388948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.389257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.389272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.389481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.389690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.389702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.390018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.390299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.390316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 
00:33:25.406 [2024-04-24 10:28:38.390482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.390723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.390739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.390887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.391132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.391146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.391380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.391632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.391662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.392045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.392393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.392409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.392732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.392887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.392899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.393132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.393479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.393521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.393801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.394104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.394137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 
00:33:25.406 [2024-04-24 10:28:38.394361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.394582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.394613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.394819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.395154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.395187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.395507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.395869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.395900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.396166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.396485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.396516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.396736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.397051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.397107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.397390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.397585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.397616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 00:33:25.406 [2024-04-24 10:28:38.397959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.398221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.398254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.406 qpair failed and we were unable to recover it. 
00:33:25.406 [2024-04-24 10:28:38.398518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.398858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.406 [2024-04-24 10:28:38.398889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.399228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.399511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.399542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.399749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.400028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.400059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.400362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.400617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.400648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.400910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.401185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.401217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.401488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.401699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.401731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.401997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.402315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.402347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 
00:33:25.407 [2024-04-24 10:28:38.402635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.402983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.403014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.403286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.403483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.403514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.403873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.404202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.404235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.404500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.404794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.404825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.405044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.405331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.405345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.405513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.405727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.405752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.405981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.406246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.406259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 
00:33:25.407 [2024-04-24 10:28:38.406575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.406852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.406882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.407151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.407321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.407352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.407618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.407890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.407921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.408248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.408504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.408535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.408870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.409229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.409261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.409488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.409707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.409738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.410057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.410315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.410347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 
00:33:25.407 [2024-04-24 10:28:38.410621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.410952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.410983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.411328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.411543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.411584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.411836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.412144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.412157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.412399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.412602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.412633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.412838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.413031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.413062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.413417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.413781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.413813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.414136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.414451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.414482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 
00:33:25.407 [2024-04-24 10:28:38.414730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.415049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.407 [2024-04-24 10:28:38.415090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.407 qpair failed and we were unable to recover it. 00:33:25.407 [2024-04-24 10:28:38.415421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.415705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.415739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.408 qpair failed and we were unable to recover it. 00:33:25.408 [2024-04-24 10:28:38.415992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.416319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.416366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.408 qpair failed and we were unable to recover it. 00:33:25.408 [2024-04-24 10:28:38.416695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.416919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.416954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.408 qpair failed and we were unable to recover it. 00:33:25.408 [2024-04-24 10:28:38.417295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.417506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.417519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.408 qpair failed and we were unable to recover it. 00:33:25.408 [2024-04-24 10:28:38.417740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.417975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.417990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.408 qpair failed and we were unable to recover it. 00:33:25.408 [2024-04-24 10:28:38.418266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.418546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.408 [2024-04-24 10:28:38.418577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.408 qpair failed and we were unable to recover it. 
00:33:25.408 [2024-04-24 10:28:38.418857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.419054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.419095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.419372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.419621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.419634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.419946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.420236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.420270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.420529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.420813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.420844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.421218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.421436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.421466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.421684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.421942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.421974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.422213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.422390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.422404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.422622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.422959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.422990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.423330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.423593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.423624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.423886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.424182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.424215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.424492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.424868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.424900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.425191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.425458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.425490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.425687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.425877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.425908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.426118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.426291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.426323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.426595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.426890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.426922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.427243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.427530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.427562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.427909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.428283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.428316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.428638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.428897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.428929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.429202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.429462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.429494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.429768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.430030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.430062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.430428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.430695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.430728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.431094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.431293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.431325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.408 qpair failed and we were unable to recover it.
00:33:25.408 [2024-04-24 10:28:38.431616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.408 [2024-04-24 10:28:38.431988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.432019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.432357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.432564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.432596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.432935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.433276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.433310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.433515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.433745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.433776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.434044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.434338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.434370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.434575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.434784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.434816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.435063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.435288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.435319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.435522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.435854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.435866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.436174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.436500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.436532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.436750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.437029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.437060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.437345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.437669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.437699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.438044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.438374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.438406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.438682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.439023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.439055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.439345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.439600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.439632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.439958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.440275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.440308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.440563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.440863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.440894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.441171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.441436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.441467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.441672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.442013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.442043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.442275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.442518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.442550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.442926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.443175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.443207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.443553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.443884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.443915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.444176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.444360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.444391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.444667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.444994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.445024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.445316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.445562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.445593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.445863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.446114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.446146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.446440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.446815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.446846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.447191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.447463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.447494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.447721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.448099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.409 [2024-04-24 10:28:38.448132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.409 qpair failed and we were unable to recover it.
00:33:25.409 [2024-04-24 10:28:38.448328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.448536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.448568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.448967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.449305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.449338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.449604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.449806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.449836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.450100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.450292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.450305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.450580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.450760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.450773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.451052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.451255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.451268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.451513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.451834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.451864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.452187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.452446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.452459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.452698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.452854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.452866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.453125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.453339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.453370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.453735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.453915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.453952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.454246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.454510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.454522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.454782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.454997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.455009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.455266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.455497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.455529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.455858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.456067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.456112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.456369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.456655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.456686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.457029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.457247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.457261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.457503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.457766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.457797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.458064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.458358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.458371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.458593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.458955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.458968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.459252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.459513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.459549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.459869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.460155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.460188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.460512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.460811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.460843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.461116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.461388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.461401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.461662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.461892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.461905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.410 qpair failed and we were unable to recover it.
00:33:25.410 [2024-04-24 10:28:38.462198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.462451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.410 [2024-04-24 10:28:38.462483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.462759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.463018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.463049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.463407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.463603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.463633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.463909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.464277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.464310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.464603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.464979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.465008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.465276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.465542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.465579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.465937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.466229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.466244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.466486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.466790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.466821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.467178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.467392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.467423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.467685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.468021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.468052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.468300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.468629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.468660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.469006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.469257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.469271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.469506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.469723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.469754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.470105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.470453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.470484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.470750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.471031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.471062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.471425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.471690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.471705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.471934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.472154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.472186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.472510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.472864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.472877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.473158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.473432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.473463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.473775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.474036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.474066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.474361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.474626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.474658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.474927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.475141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.475174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.475516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.475703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.475734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.476103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.476420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.476452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.476728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.476956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.476968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.477272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.477542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.477574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.477978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.478295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.478328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.411 qpair failed and we were unable to recover it.
00:33:25.411 [2024-04-24 10:28:38.478625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.411 [2024-04-24 10:28:38.478975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.479006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.479404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.479671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.479703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.480046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.480345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.480377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.480684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.480938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.480969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.482297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.482643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.482679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.482989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.483261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.483294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.483509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.483847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.483877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.484215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.484536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.484567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.484888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.485205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.485238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.485595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.485939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.485970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.486287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.486606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.486639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.486902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.487229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.487263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.487585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.487906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.487937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.488207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.488484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.488515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.488775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.489036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.489067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.489422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.489673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.489704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.489993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.490245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.490278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.490563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.490892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.490923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.491142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.491462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.491494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.491758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.492095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.492127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.492458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.492729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.492741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.492961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.493268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.493300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.493600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.493901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.493932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.494248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.494462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.494494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.494807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.495048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.495060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.495335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.495580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.495592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.495880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.496171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.496204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.496530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.496747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.496760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.496985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.497213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.412 [2024-04-24 10:28:38.497226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.412 qpair failed and we were unable to recover it.
00:33:25.412 [2024-04-24 10:28:38.497412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.497662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.497694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.498036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.498332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.498360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.498541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.498727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.498757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.499093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.499358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.499390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.499677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.500014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.500046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.500409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.500700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.500731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.501052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.501409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.501442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.501850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.502125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.502158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.502423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.502739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.502771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.503069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.503365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.503397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.503815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.504084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.504117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.504454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.504833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.504865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.505139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.505356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.505369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.505597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.505996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.506027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.506398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.506717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.506748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.507095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.507362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.507393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.507714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.507987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.508018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.508231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.508420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.508451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.508672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.509012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.509044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.509366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.509578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.509609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.510018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.510388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.510421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.510678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.511016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.511047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.511411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.511680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.413 [2024-04-24 10:28:38.511712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.413 qpair failed and we were unable to recover it.
00:33:25.413 [2024-04-24 10:28:38.512094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.512302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.512333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.413 qpair failed and we were unable to recover it. 00:33:25.413 [2024-04-24 10:28:38.512628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.512883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.512914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.413 qpair failed and we were unable to recover it. 00:33:25.413 [2024-04-24 10:28:38.513184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.513446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.513459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.413 qpair failed and we were unable to recover it. 00:33:25.413 [2024-04-24 10:28:38.513632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.513956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.513987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.413 qpair failed and we were unable to recover it. 00:33:25.413 [2024-04-24 10:28:38.514275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.514462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.514493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.413 qpair failed and we were unable to recover it. 00:33:25.413 [2024-04-24 10:28:38.514820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.515157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.413 [2024-04-24 10:28:38.515189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.515483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.515732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.515745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 
00:33:25.414 [2024-04-24 10:28:38.515970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.516270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.516302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.516628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.516872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.516902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.517231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.517543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.517574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.517874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.518241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.518274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.518649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.518986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.519017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.519356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.519618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.519650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.519918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.520228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.520261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 
00:33:25.414 [2024-04-24 10:28:38.520554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.520827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.520858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.521214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.521576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.521607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.521909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.522183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.522220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.522403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.522689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.522720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.522921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.523223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.523256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.523552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.523919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.523951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.524242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.524499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.524512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 
00:33:25.414 [2024-04-24 10:28:38.524833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.525046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.525088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.525343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.525555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.525567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.525883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.526232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.526265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.526533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.526809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.526840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.527179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.527378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.527409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.527682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.528012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.528044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.528400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.528620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.528652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 
00:33:25.414 [2024-04-24 10:28:38.528914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.529183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.529216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.529518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.529924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.529955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.530332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.530538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.530570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.530826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.531134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.531167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.531413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.531779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.531810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.532059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.532405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.532417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.414 qpair failed and we were unable to recover it. 00:33:25.414 [2024-04-24 10:28:38.532611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.414 [2024-04-24 10:28:38.532878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.532909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 
00:33:25.415 [2024-04-24 10:28:38.533163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.533428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.533462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.533731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.534024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.534055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.534292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.534649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.534680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.534962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.535249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.535283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.535495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.535695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.535727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.536080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.536391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.536422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.536689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.536969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.537001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 
00:33:25.415 [2024-04-24 10:28:38.537276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.537593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.537624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.537984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.538324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.538337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.538575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.538868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.538898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.539237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.539494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.539525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.539838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.540143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.540176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.540477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.540733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.540764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.541038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.541411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.541443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 
00:33:25.415 [2024-04-24 10:28:38.541793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.542116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.542148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.542419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.542685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.542716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.543053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.543351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.543383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.543643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.544001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.544033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.544367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.544627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.544658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.545015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.545356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.545389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.545659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.545914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.545945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 
00:33:25.415 [2024-04-24 10:28:38.546219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.546603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.546635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.547016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.547312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.547344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.415 [2024-04-24 10:28:38.547614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.548001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.415 [2024-04-24 10:28:38.548032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.415 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.548386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.548650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.548663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.548966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.549257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.549289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.549559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.549831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.549863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.550121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.550396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.550426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 
00:33:25.416 [2024-04-24 10:28:38.550707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.551043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.551090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.551463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.551709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.551740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.552084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.552281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.552313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.552595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.552953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.552984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.553294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.553658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.553695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.554026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.554363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.554395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.554745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.554893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.554904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 
00:33:25.416 [2024-04-24 10:28:38.555111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.555419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.555450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.555814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.556161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.556193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.556407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.556688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.556718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.557042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.557304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.557337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.557603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.557905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.557936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.558191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.558470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.558500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.558865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.559145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.559178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 
00:33:25.416 [2024-04-24 10:28:38.559496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.559748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.559785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.560060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.560328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.560360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.560715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.561060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.561106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.561386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.561582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.561612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.561893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.562139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.562153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.562426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.562742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.562772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.563115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.563439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.563470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 
00:33:25.416 [2024-04-24 10:28:38.563739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.563923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.563954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.564201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.564485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.564516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.564836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.565128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.565161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.416 [2024-04-24 10:28:38.565393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.565577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.416 [2024-04-24 10:28:38.565614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.416 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.565905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.566270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.566303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.566876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.567225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.567241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.567470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.567710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.567724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 
00:33:25.417 [2024-04-24 10:28:38.567884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.568126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.568140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.568367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.568614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.568628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.568977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.569149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.569163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.569413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.569696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.569709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.569951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.570242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.570255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.570484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.570831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.570844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.571104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.571344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.571360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 
00:33:25.417 [2024-04-24 10:28:38.571643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.571857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.571870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.572162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.572319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.572332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.572545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.572757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.572770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.572990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.573221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.573234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.573396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.573697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.573710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.573926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.574205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.574219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.574525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.574701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.574713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 
00:33:25.417 [2024-04-24 10:28:38.574937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.575233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.575246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.575480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.575633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.575646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.575967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.576260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.576273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.576543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.576737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.576750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.576892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.577152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.577164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.577376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.577674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.577705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.578022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.578302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.578334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 
00:33:25.417 [2024-04-24 10:28:38.578622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.578975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.579006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.579349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.579621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.579653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.579844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.580095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.580127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.580348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.580567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.580598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.417 qpair failed and we were unable to recover it. 00:33:25.417 [2024-04-24 10:28:38.580893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.581156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.417 [2024-04-24 10:28:38.581189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.581443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.581791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.581822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.582027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.582310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.582342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 
00:33:25.418 [2024-04-24 10:28:38.582695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.582944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.582974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.583262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.583474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.583506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.583695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.584005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.584017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.584243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.584503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.584533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.584817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.585019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.585049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.585320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.585599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.585629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.585893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.586259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.586292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 
00:33:25.418 [2024-04-24 10:28:38.586540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.586709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.586740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.587004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.587268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.587301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.587559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.587858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.587871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.588132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.588388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.588419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.588739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.589054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.589095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.589374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.589716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.589729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.590100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.590352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.590383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 
00:33:25.418 [2024-04-24 10:28:38.590720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.591059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.591103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.591372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.591581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.591594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.591878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.592193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.592226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.592577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.592898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.592929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.593191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.593550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.593581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.593792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.593956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.593968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.594242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.594412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.594443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 
00:33:25.418 [2024-04-24 10:28:38.594796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.595050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.595091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.595357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.595698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.595729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.596011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.596227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.596241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.596419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.596643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.596673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.597015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.597291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.597324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.418 qpair failed and we were unable to recover it. 00:33:25.418 [2024-04-24 10:28:38.597668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.418 [2024-04-24 10:28:38.597825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.597855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.598054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.598318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.598350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 
00:33:25.419 [2024-04-24 10:28:38.599164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.599476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.599488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.599797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.600106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.600120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.600357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.600639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.600651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.600892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.601108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.601121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.601359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.601709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.601740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.602088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.602365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.602397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.602673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.603015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.603045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 
00:33:25.419 [2024-04-24 10:28:38.603400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.603659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.603689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.604031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.604302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.604335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.604693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.605035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.605066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.605294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.605507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.605538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.605817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.606088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.606121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.606469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.606825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.606837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.607159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.607497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.607528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 
00:33:25.419 [2024-04-24 10:28:38.607859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.608142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.608175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.608523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.608782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.608795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.609102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.609442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.609474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.609823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.610066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.610131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.610454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.610655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.610687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.611000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.611320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.611353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.611707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.612030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.612061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 
00:33:25.419 [2024-04-24 10:28:38.612338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.612625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.612656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.612970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.613214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.613226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.613499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.613786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.613817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.614127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.614408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.614420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.614653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.614955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.614967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.615258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.615544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.615574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.419 qpair failed and we were unable to recover it. 00:33:25.419 [2024-04-24 10:28:38.615872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.419 [2024-04-24 10:28:38.616129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.616142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 
00:33:25.420 [2024-04-24 10:28:38.616309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.616629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.616642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.616861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.617208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.617240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.617531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.617861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.617873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.618047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.618204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.618217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.618516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.618767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.618797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.619100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.619384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.619414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.619619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.619857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.619888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 
00:33:25.420 [2024-04-24 10:28:38.620245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.620499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.620512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.620735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.621068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.621109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.621311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.621570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.621600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.621862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.622105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.622138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.622482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.622729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.622761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.623030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.623385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.623418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.623763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.624062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.624105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 
00:33:25.420 [2024-04-24 10:28:38.624375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.624641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.624671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.624936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.625267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.625300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.625520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.625819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.625851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.626163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.626465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.626496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.626745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.627020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.627050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.627407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.627726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.627756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.628106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.628430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.628461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 
00:33:25.420 [2024-04-24 10:28:38.628802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.629144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.629157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.629464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.629707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.629738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.420 qpair failed and we were unable to recover it. 00:33:25.420 [2024-04-24 10:28:38.630016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.630314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.420 [2024-04-24 10:28:38.630347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.630711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.631053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.631093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.631419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.631759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.631790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.632133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.632403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.632434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.632782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.633111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.633144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 
00:33:25.421 [2024-04-24 10:28:38.633486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.633817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.633848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.634172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.634521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.634553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.634877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.635229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.635262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.635613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.635934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.635966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.636342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.636678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.636710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.637045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.637409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.637442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.637790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.638057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.638113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 
00:33:25.421 [2024-04-24 10:28:38.638482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.638815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.638845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.639170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.639438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.639470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.639738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.640090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.640122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.640389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.640590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.640621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.640969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.641243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.641276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.641533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.641806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.641837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.642130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.642448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.642479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 
00:33:25.421 [2024-04-24 10:28:38.642826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.643155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.643188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.643534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.643786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.643823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.644114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.644475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.644506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.644853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.645137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.645169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.645429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.645763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.645794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.646002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.646281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.646313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.646658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.646992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.647023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 
00:33:25.421 [2024-04-24 10:28:38.647253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.647594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.647625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.647911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.648251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.648283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.648615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.648959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.648990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.421 qpair failed and we were unable to recover it. 00:33:25.421 [2024-04-24 10:28:38.649335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.421 [2024-04-24 10:28:38.649645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.649675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.650027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.650377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.650414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.650736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.651000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.651031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.651387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.651704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.651735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 
00:33:25.422 [2024-04-24 10:28:38.652058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.652330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.652363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.652713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.652916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.652947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.653278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.653597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.653628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.653984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.654308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.654342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.654636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.654954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.654986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.655247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.655557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.655588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 00:33:25.422 [2024-04-24 10:28:38.655912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.656262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.422 [2024-04-24 10:28:38.656295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.422 qpair failed and we were unable to recover it. 
00:33:25.422 [2024-04-24 10:28:38.656638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.656969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.657005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.657291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.657550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.657581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.657913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.658078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.658091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.658401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.658668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.658700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.658965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.659228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.659260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.659529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.659725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.659756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.660120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.660468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.660499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.660827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.661155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.661169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.661420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.661722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.661735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.661886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.662130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.662170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.662434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.662723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.662767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.663087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.663355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.663386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.663732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.664059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.664100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.664290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.664652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.664682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.665008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.665359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.665391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.665651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.666020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.666051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.666426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.666710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.666741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.667008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.667345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.667377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.422 qpair failed and we were unable to recover it.
00:33:25.422 [2024-04-24 10:28:38.667711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.422 [2024-04-24 10:28:38.667970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.668000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.668276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.668518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.668530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.668758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.669103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.669152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.669460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.669719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.669751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.669999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.670231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.670246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.670492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.670679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.670711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.670976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.671315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.671348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.671683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.672025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.672040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.672335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.672633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.672647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.672955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.673203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.673237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.673495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.673761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.673797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.423 [2024-04-24 10:28:38.674137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.674490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.423 [2024-04-24 10:28:38.674522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.423 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.674784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.674998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.675011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.675323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.675633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.675657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.675963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.676284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.676299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.676607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.676912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.676924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.677143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.677412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.677443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.677737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.678065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.678091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.678319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.678581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.678612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.678891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.679233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.679271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.679492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.679841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.679872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.680219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.680411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.680442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.680779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.681134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.681167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.681518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.681835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.681865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.682066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.682450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.682482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.682737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.683068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.683110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.683408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.683669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.683699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.683894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.684175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.684208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.684509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.684755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.684785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.685132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.685344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.685356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.685648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.685986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.686017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.686361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.686697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.686728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.687084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.687411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.687442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.687792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.688120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.688152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.688442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.688781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.688811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.694 qpair failed and we were unable to recover it.
00:33:25.694 [2024-04-24 10:28:38.689092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.694 [2024-04-24 10:28:38.689379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.689410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.689780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.690120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.690153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.690421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.690762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.690792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.691117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.691358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.691389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.691725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.692085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.692117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.692378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.692717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.692749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.693015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.693290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.693323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.693599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.693885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.693915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.694288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.694634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.694666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.694961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.695325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.695358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.695681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.695950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.695982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.696250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.696473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.696504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.696856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.697193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.697225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.697565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.697903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.697933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.698266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.698552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.698583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.698909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.699100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.699132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.699453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.699748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.699780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.700059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.700436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.700467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.700799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.701060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.701100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.701358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.701635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.701647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.701969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.702175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.702221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.702545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.702897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.702936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.703239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.703544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.703559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.703789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.704130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.704143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.704369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.704679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.704713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.704987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.705334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.705367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.705711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.706044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.706089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.706415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.706752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.706783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.706993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.707374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.695 [2024-04-24 10:28:38.707406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.695 qpair failed and we were unable to recover it.
00:33:25.695 [2024-04-24 10:28:38.707683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.707945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.707958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.708202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.708425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.708437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.708680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.708861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.708874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.709132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.709501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.709532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.709877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.710138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.710171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.710516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.710840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.710871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.711135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.711478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.711509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.711836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.712185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.712217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.712544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.712824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.712855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.713194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.713531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.713562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.713829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.714175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.714207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.714520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.714770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.714801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.715143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.715471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.715503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.715789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.716131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.716163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.716494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.716708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.716738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.717091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.717411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.717442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.717733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.717926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.717957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.718198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.718537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.718568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.718827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.719096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.719128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.719473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.719795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.719834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.720126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.720497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.720529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.720811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.721157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.721190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.721387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.721701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.721731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.722078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.722408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.722438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.722697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.723059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.723102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.723428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.723768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.723799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.724083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.724343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.724374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.724643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.724978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.725009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.696 [2024-04-24 10:28:38.725273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.725610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.696 [2024-04-24 10:28:38.725641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.696 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.725905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.726179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.726212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.726470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.726723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.726754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.727009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.727369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.727401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.727671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.727954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.727999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.728302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.728552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.728565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.728825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.729140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.729186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.729415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.729592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.729604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.729911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.730152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.730165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.730477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.730792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.730823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.731079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.731423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.731454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.731725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.732093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.732125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.732449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.732783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.732814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.733158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.733405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.733437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.733788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.734048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.734088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.734439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.734758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.697 [2024-04-24 10:28:38.734789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.697 qpair failed and we were unable to recover it.
00:33:25.697 [2024-04-24 10:28:38.735084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.735445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.735476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.735823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.736152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.736184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.736454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.736706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.736737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.737108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.737309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.737340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.737686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.737948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.737979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.738374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.738679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.738711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.739036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.739337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.739369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 
00:33:25.697 [2024-04-24 10:28:38.739694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.739975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.740007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.740285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.740602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.740634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.740958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.741246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.741279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.741655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.741932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.741963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.742307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.742562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.742592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.742915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.743249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.743281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 00:33:25.697 [2024-04-24 10:28:38.743584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.743852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.697 [2024-04-24 10:28:38.743883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.697 qpair failed and we were unable to recover it. 
00:33:25.698 [2024-04-24 10:28:38.744232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.744525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.744556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.744829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.745096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.745129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.745434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.745804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.745834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.746098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.746264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.746277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.746583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.746850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.746880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.747238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.747542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.747573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.747926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.748187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.748219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 
00:33:25.698 [2024-04-24 10:28:38.748487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.748647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.748660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.748942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.749194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.749227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.749595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.749857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.749888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.750236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.750496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.750527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.750888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.751204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.751244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.751598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.751866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.751897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.752099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.752368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.752400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 
00:33:25.698 [2024-04-24 10:28:38.752619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.752906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.752951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.753191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.753487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.753517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.753743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.754010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.754041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.754431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.754700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.754732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.754989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.755312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.755346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.698 qpair failed and we were unable to recover it. 00:33:25.698 [2024-04-24 10:28:38.755620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.698 [2024-04-24 10:28:38.755909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.755941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.756217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.756485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.756516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 
00:33:25.699 [2024-04-24 10:28:38.756822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.757521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.757553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.757938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.758223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.758239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.758466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.758633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.758645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.758886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.759114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.759127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.759371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.759625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.759638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.759865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.760114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.760127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.760295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.760550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.760562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 
00:33:25.699 [2024-04-24 10:28:38.760772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.761060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.761078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.761316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.761622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.761636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.761877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.762097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.762111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.762416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.762584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.762601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.762856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.763009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.763021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.763331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.763587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.763600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.763907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.764132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.764145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 
00:33:25.699 [2024-04-24 10:28:38.764368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.764670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.764682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.764917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.765256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.765269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.765498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.765704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.765716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.766019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.766247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.766261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.766597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.766809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.766821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.767126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.767384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.767396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.767713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.768017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.768030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 
00:33:25.699 [2024-04-24 10:28:38.768332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.768558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.768570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.768787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.769013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.769025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.769258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.769483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.769496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.769679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.769989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.770002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.770210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.770490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.770503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.770830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.771012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.699 [2024-04-24 10:28:38.771025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.699 qpair failed and we were unable to recover it. 00:33:25.699 [2024-04-24 10:28:38.771258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.771530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.771543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 
00:33:25.700 [2024-04-24 10:28:38.771869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.772171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.772184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.772494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.772673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.772686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.772848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.773147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.773162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.773402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.773696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.773708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.773991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.774280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.774294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.774524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.774750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.774763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.774985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.775202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.775217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 
00:33:25.700 [2024-04-24 10:28:38.775491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.775825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.775837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.776149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.776394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.776407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.776733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.777033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.777046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.777314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.777591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.777604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.777763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.778045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.778057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.778355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.778502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.778514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.778750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.778962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.778974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 
00:33:25.700 [2024-04-24 10:28:38.779229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.779389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.779401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.779648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.779960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.779973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.780141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.780387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.780399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.780614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.780841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.780854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.781214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.781511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.781522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.781823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.782107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.782120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.782345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.782576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.782588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 
00:33:25.700 [2024-04-24 10:28:38.782760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.782999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.783012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.783178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.783457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.783469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.783766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.784044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.784057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.784412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.784653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.784666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.784848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.785092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.785105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.785322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.785492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.785505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.700 qpair failed and we were unable to recover it. 00:33:25.700 [2024-04-24 10:28:38.785801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.700 [2024-04-24 10:28:38.786095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.786108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 
00:33:25.701 [2024-04-24 10:28:38.786327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.786497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.786509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.786669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.786966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.786978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.787271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.787493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.787504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.787726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.787963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.787975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.788263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.788561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.788574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.788869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.789094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.789107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.789260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.789470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.789482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 
00:33:25.701 [2024-04-24 10:28:38.789755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.789997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.790008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.790309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.790583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.790595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.790827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.790997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.791009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.791160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.791434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.791446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.791764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.791986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.791999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.792208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.792412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.792424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.792669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.792945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.792957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 
00:33:25.701 [2024-04-24 10:28:38.793179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.793395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.793407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.793629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.793840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.793852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.794164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.794338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.794350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.794626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.794849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.794861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.795101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.795316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.795329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.795630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.795931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.795943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.796252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.796572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.796584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 
00:33:25.701 [2024-04-24 10:28:38.796739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.797027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.797040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.797333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.797580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.797592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.797815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.798060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.798078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.798327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.798652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.798664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.798936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.799242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.799255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.799550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.799766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.799778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 00:33:25.701 [2024-04-24 10:28:38.800076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.800285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.701 [2024-04-24 10:28:38.800297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.701 qpair failed and we were unable to recover it. 
00:33:25.701 [2024-04-24 10:28:38.800539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.800742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.800754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.800920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.801135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.801168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.801454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.801648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.801678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.801944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.802192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.802224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.802553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.802797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.802810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.803133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.803358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.803389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.803591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.803947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.803977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 
00:33:25.702 [2024-04-24 10:28:38.804322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.804649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.804679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.805016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.805237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.805249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.805528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.805747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.805777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.806032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.806288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.806301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.806453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.806728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.806759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.807049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.807299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.807311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.807540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.807819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.807849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 
00:33:25.702 [2024-04-24 10:28:38.808188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.808454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.808485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.808690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.808977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.809008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.809289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.809625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.809656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.809854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.810099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.810132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.810394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.810564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.810576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.810736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.810883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.810895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.811210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.811419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.811431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 
00:33:25.702 [2024-04-24 10:28:38.811726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.812021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.812033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.812272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.812428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.812458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.812713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.812992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.813023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.813364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.813596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.813609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.813766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.813981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.813993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.814232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.814501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.814532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.702 [2024-04-24 10:28:38.814776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.814984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.815014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 
00:33:25.702 [2024-04-24 10:28:38.815335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.815527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.702 [2024-04-24 10:28:38.815557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.702 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.815888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.816157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.816189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.816455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.816662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.816693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.816948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.817146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.817178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.817429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.817641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.817670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.817917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.818173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.818205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.818409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.818607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.818637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 
00:33:25.703 [2024-04-24 10:28:38.818839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.819043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.819097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.819350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.819553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.819565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.819742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.819994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.820024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.820356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.820623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.820655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.703 qpair failed and we were unable to recover it. 00:33:25.703 [2024-04-24 10:28:38.820849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.821103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.703 [2024-04-24 10:28:38.821135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.821396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.821619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.821650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.821859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.822102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.822134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 
00:33:25.704 [2024-04-24 10:28:38.822327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.822560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.822590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.822853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.823192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.823204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.823501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.823718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.823730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.823892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.824111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.824123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.824293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.824456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.824468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.824693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.824972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.824984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.825206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.825427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.825439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 
00:33:25.704 [2024-04-24 10:28:38.825682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.825827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.825840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.826017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.826155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.826168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.826322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.826502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.826514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.826719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.826949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.826961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.827116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.827277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.827290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.827498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.827715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.827727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.827887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.828090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.828103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 
00:33:25.704 [2024-04-24 10:28:38.828255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.828467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.828480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.828700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.828843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.828856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.704 qpair failed and we were unable to recover it. 00:33:25.704 [2024-04-24 10:28:38.829083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.704 [2024-04-24 10:28:38.829300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.829331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.829658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.829904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.829935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.830169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.830297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.830309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.830440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.830733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.830765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.831105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.831342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.831354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 
00:33:25.705 [2024-04-24 10:28:38.831516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.831791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.831821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.832018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.832287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.832319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.832584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.832833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.832862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.833059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.833331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.833362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.833670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.833999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.834035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.834283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.834605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.834616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.834855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.835048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.835101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 
00:33:25.705 [2024-04-24 10:28:38.835366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.835617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.835647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.835909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.836189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.836202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.836423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.836583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.836595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.836741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.837051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.837092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.837277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.837600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.837629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.837851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.838106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.838138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.838331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.838583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.838613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 
00:33:25.705 [2024-04-24 10:28:38.838893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.839143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.839181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.839382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.839629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.839659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.839907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.840216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.840248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.840441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.840631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.840661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.840916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.841169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.841201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.841454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.841713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.841743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.842054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.842344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.842382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 
00:33:25.705 [2024-04-24 10:28:38.842696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.842856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.842886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.843252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.843345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.843357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.843497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.843662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.705 [2024-04-24 10:28:38.843675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.705 qpair failed and we were unable to recover it. 00:33:25.705 [2024-04-24 10:28:38.843825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.844029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.844043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.844248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.844417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.844428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.844662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.844853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.844884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.845145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.845466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.845496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 
00:33:25.706 [2024-04-24 10:28:38.845753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.845930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.845960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.846133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.846410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.846440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.846649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.846835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.846866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.847220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.847432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.847444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.847653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.847964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.847994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.848248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.848478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.848490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.848758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.848855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.848885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 
00:33:25.706 [2024-04-24 10:28:38.849097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.849299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.849330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.849646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.849993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.850023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.850224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.850410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.850440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.850807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.851003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.851033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.851384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.851651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.851681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.852006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.852195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.852226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.852465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.852600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.852612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 
00:33:25.706 [2024-04-24 10:28:38.852863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.853108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.853140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.853387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.853703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.853733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.854062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.854403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.854415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.854652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.854895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.854925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.855128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.855338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.855368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.855682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.855872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.855902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.856097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.856419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.856431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 
00:33:25.706 [2024-04-24 10:28:38.856593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.856748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.856778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.857035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.857216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.857228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.857445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.857744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.857774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.706 [2024-04-24 10:28:38.858045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.858296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.706 [2024-04-24 10:28:38.858309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.706 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.858515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.858673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.858685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.858837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.859003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.859014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.859239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.859455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.859467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 
00:33:25.707 [2024-04-24 10:28:38.859695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.859849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.859879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.860152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.860333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.860368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.860575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.860874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.860885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.861106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.861308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.861320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.861529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.861692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.861704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.861917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.862125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.862137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 00:33:25.707 [2024-04-24 10:28:38.862379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.862598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.707 [2024-04-24 10:28:38.862629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.707 qpair failed and we were unable to recover it. 
00:33:25.707 [2024-04-24 10:28:38.862902 .. 10:28:38.935477] [... repeated identical entries omitted: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 ...]
00:33:25.707 [... each repeat follows with: nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:33:25.712 [2024-04-24 10:28:38.935731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.936007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.936037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.936374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.936650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.936679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.936864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.937137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.937169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.937417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.937767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.937797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.938107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.938430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.938460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.938789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.939022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.939052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.939384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.939652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.939675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 
00:33:25.712 [2024-04-24 10:28:38.939916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.940069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.940109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.940364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.940663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.940693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.940873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.941105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.941137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.941388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.941730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.941760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.942012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.942268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.942299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.942527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.942741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.942771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.712 [2024-04-24 10:28:38.943044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.943314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.943344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 
00:33:25.712 [2024-04-24 10:28:38.943622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.943924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.712 [2024-04-24 10:28:38.943953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.712 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.944206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.944455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.944484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.944725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.944961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.944991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.945296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.945614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.945644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.945914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.946215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.946246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.946430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.946625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.946654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.946970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.947245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.947276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 
00:33:25.713 [2024-04-24 10:28:38.947469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.947781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.947810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.948141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.948392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.948422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.948566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.948879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.948909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.949103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.949292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.949323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.949563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.949714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.949744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.949989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.950162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.950193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.950444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.950693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.950723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 
00:33:25.713 [2024-04-24 10:28:38.951051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.951256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.951266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.951496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.951653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.951664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.951908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.952156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.952187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.952446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.952697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.952727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.952930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.953165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.953197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.953451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.953700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.953730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.953981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.954214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.954245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 
00:33:25.713 [2024-04-24 10:28:38.954506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.954773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.954802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.955087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.955362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.955392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.955572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.955808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.713 [2024-04-24 10:28:38.955846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.713 qpair failed and we were unable to recover it. 00:33:25.713 [2024-04-24 10:28:38.956179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.956428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.956439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.956689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.956910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.956922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.957127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.957366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.957399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.957592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.957833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.957863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 
00:33:25.714 [2024-04-24 10:28:38.958131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.958340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.958353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.958565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.958788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.958799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.959077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.959316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.959326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.959524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.959740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.959752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.959971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.960216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.960227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.960486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.960779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.960812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 00:33:25.714 [2024-04-24 10:28:38.961100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.961247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.714 [2024-04-24 10:28:38.961259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.714 qpair failed and we were unable to recover it. 
00:33:25.986 [2024-04-24 10:28:38.961472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.986 [2024-04-24 10:28:38.961738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.986 [2024-04-24 10:28:38.961749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.986 qpair failed and we were unable to recover it. 00:33:25.986 [2024-04-24 10:28:38.961948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.986 [2024-04-24 10:28:38.962186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.986 [2024-04-24 10:28:38.962206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.986 qpair failed and we were unable to recover it. 00:33:25.986 [2024-04-24 10:28:38.962402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.986 [2024-04-24 10:28:38.962616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.986 [2024-04-24 10:28:38.962627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.986 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.962780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.962991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.963004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.963231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.963446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.963458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.963655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.963920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.963931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.964087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.964355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.964384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 
00:33:25.987 [2024-04-24 10:28:38.964689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.964831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.964842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.965127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.965350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.965361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.965577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.965821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.965851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.966108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.966411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.966440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.966644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.966913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.966942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.967193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.967445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.967475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.967717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.967954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.967983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 
00:33:25.987 [2024-04-24 10:28:38.968265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.968458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.968488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.968772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.969102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.969133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.969412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.969647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.969676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.969927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.970158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.970190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.970521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.970867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.970897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.971154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.971391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.971421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.971750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.972053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.972092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 
00:33:25.987 [2024-04-24 10:28:38.972374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.972701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.972731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.972937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.973254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.973286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.973541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.973839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.973869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.974088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.974337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.974367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.974695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.974941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.974970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.975152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.975405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.975435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.975697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.975923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.975934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 
00:33:25.987 [2024-04-24 10:28:38.976150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.976367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.976379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.976533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.976822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.976851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.977170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.977502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.977531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.987 qpair failed and we were unable to recover it. 00:33:25.987 [2024-04-24 10:28:38.977786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.987 [2024-04-24 10:28:38.978088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.978120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.978363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.978553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.978582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.978843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.979030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.979060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.979350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.979658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.979688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 
00:33:25.988 [2024-04-24 10:28:38.979894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.980143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.980174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.980423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.980691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.980720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.980919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.981195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.981227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.981493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.981659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.981689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.982032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.982283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.982313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.982619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.982846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.982857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.983147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.983467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.983496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 
00:33:25.988 [2024-04-24 10:28:38.983775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.984027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.984057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.984373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.984671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.984701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.984943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.985228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.985260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.985597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.985766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.985795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.986013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.986255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.986286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.986521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.986671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.986701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.987039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.987266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.987298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 
00:33:25.988 [2024-04-24 10:28:38.987440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.987686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.987715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.987953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.988203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.988234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.988414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.988760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.988790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.989046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.989333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.989365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.989718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.989913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.989943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.990277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.990530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.990564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.990818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.991111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.991142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 
00:33:25.988 [2024-04-24 10:28:38.991386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.991621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.991650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.991844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.992023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.992053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.992325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.992505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.992516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.992732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.992967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.988 [2024-04-24 10:28:38.992996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.988 qpair failed and we were unable to recover it. 00:33:25.988 [2024-04-24 10:28:38.993240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.989 [2024-04-24 10:28:38.993487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.989 [2024-04-24 10:28:38.993517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.989 qpair failed and we were unable to recover it. 00:33:25.989 [2024-04-24 10:28:38.993850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.989 [2024-04-24 10:28:38.994127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.989 [2024-04-24 10:28:38.994158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.989 qpair failed and we were unable to recover it. 00:33:25.989 [2024-04-24 10:28:38.994408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.989 [2024-04-24 10:28:38.994669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.989 [2024-04-24 10:28:38.994681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.989 qpair failed and we were unable to recover it. 
00:33:25.994 [2024-04-24 10:28:39.070160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.070433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.070463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.070670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.070850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.070880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.071139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.071475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.071504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.071763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.072019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.072049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.072343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.072605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.072644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.072858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.073079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.073092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.073307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.073468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.073498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 
00:33:25.994 [2024-04-24 10:28:39.073802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.073952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.073982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.074310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.074577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.074613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.074890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.075141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.075172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.075442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.075745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.075775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.075964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.076221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.076252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.076448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.076709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.076738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.076942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.077241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.077272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 
00:33:25.994 [2024-04-24 10:28:39.077576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.077760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.077790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.078099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.078423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.078458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.078657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.078928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.994 [2024-04-24 10:28:39.078958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.994 qpair failed and we were unable to recover it. 00:33:25.994 [2024-04-24 10:28:39.079281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.079583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.079613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.079864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.080054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.080065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.080312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.080548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.080578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.080844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.081112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.081142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 
00:33:25.995 [2024-04-24 10:28:39.081493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.081684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.081714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.081922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.082221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.082252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.082559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.082804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.082834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.083087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.083321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.083351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.083600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.083922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.083957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.084213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.084378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.084390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.084672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.084897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.084926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 
00:33:25.995 [2024-04-24 10:28:39.085117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.085357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.085387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.085579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.085830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.085860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.086097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.086368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.086398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.086663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.086861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.086890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.087084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.087319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.087351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.087609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.087913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.087942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.088185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.088431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.088461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 
00:33:25.995 [2024-04-24 10:28:39.088740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.088990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.089025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.089196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.089441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.089471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.089649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.089911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.089922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.090160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.090375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.090386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.090543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.090773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.090803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.091046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.091242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.091271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.091473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.091642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.091672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 
00:33:25.995 [2024-04-24 10:28:39.091860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.092109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.092140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.092343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.092596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.092625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.995 qpair failed and we were unable to recover it. 00:33:25.995 [2024-04-24 10:28:39.092806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.092936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.995 [2024-04-24 10:28:39.092966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.093142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.093423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.093453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.093715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.094035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.094065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.094402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.094648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.094678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.094935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.095135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.095166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 
00:33:25.996 [2024-04-24 10:28:39.095423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.095685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.095715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.096021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.096287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.096319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.096525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.096761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.096790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.096967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.097110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.097123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.097291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.097528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.097558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.097745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.098045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.098083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.098265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.098539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.098568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 
00:33:25.996 [2024-04-24 10:28:39.098736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.098942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.098971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.099218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.099399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.099429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.099692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.099941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.099970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.100232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.100370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.100381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.100577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.100781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.100792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.100959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.101130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.101141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.101288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.101473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.101503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 
00:33:25.996 [2024-04-24 10:28:39.101686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.101925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.101936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.102091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.102299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.102329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.102607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.102797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.102826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.103026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.103293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.103324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.103628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.103869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.103899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.104176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.104479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.104509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 00:33:25.996 [2024-04-24 10:28:39.104770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.105091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.996 [2024-04-24 10:28:39.105122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.996 qpair failed and we were unable to recover it. 
00:33:25.996 [2024-04-24 10:28:39.105426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.105620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.105649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.105896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.106192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.106223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.106483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.106727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.106756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.106952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.107204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.107235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.107485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.107761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.107791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.108083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.108420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.108449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.108709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.108895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.108924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 
00:33:25.997 [2024-04-24 10:28:39.109175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.109432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.109462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.109707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.109945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.109956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.110155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.110376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.110405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.110646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.110882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.110911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.111155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.111475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.111504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.111754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.112088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.112119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.112312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.112498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.112526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 
00:33:25.997 [2024-04-24 10:28:39.112778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.113023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.113052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.113366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.113713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.113743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.114028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.114222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.114253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.114491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.114684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.114713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.115048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.115303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.115314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.115527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.115611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.115621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.115837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.116059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.116102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 
00:33:25.997 [2024-04-24 10:28:39.116414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.116664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.116693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.116968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.117218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.117229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.117451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.117674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.117685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.117912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.118058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.118074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.118327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.118663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.118691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.118900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.119203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.119234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 00:33:25.997 [2024-04-24 10:28:39.119422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.119668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.997 [2024-04-24 10:28:39.119698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:25.997 qpair failed and we were unable to recover it. 
00:33:25.997 [2024-04-24 10:28:39.119944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.997 [2024-04-24 10:28:39.120176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.997 [2024-04-24 10:28:39.120207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.997 qpair failed and we were unable to recover it.
00:33:25.998 [2024-04-24 10:28:39.120502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.998 [2024-04-24 10:28:39.120799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:25.998 [2024-04-24 10:28:39.120828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:25.998 qpair failed and we were unable to recover it.
[... the same failure sequence (one or two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" messages, then "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, from 10:28:39.121001 through 10:28:39.197804 (console time 00:33:25.998 to 00:33:26.003) ...]
00:33:26.003 [2024-04-24 10:28:39.198059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.198380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.198392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.198600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.198887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.198917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.199167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.199367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.199404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.199613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.199841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.199870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.200117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.200387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.200398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.200531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.200629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.200641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.200848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.201066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.201105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 
00:33:26.003 [2024-04-24 10:28:39.201287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.201519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.201549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.201790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.201984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.202014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.202253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.202522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.202550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.202710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.202977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.203007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.203276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.203495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.203505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.203732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.204008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.204037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.204327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.204651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.204681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 
00:33:26.003 [2024-04-24 10:28:39.205037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.205242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.205273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.003 [2024-04-24 10:28:39.205521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.205715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.003 [2024-04-24 10:28:39.205745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.003 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.206052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.206307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.206338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.206526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.206839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.206869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.207118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.207420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.207450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.207695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.207951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.207980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.208333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.208572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.208601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 
00:33:26.004 [2024-04-24 10:28:39.208865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.209106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.209117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.209325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.209597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.209626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.209881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.210136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.210148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.210359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.210569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.210580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.210740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.210941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.210970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.211222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.211389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.211418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.211606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.211777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.211807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 
00:33:26.004 [2024-04-24 10:28:39.212114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.212434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.212464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.212768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.213088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.213119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.213310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.213547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.213557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.213770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.213910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.213920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.214232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.214459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.214489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.214696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.214973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.215003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.215303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.215522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.215552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 
00:33:26.004 [2024-04-24 10:28:39.215829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.216152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.216164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.216382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.216701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.216730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.217032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.217159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.217189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.217494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.217793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.217823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.218151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.218393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.218404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.218679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.218997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.219027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.219311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.219562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.219592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 
00:33:26.004 [2024-04-24 10:28:39.219870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.220184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.220196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.220437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.220734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.220763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.004 qpair failed and we were unable to recover it. 00:33:26.004 [2024-04-24 10:28:39.220971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.004 [2024-04-24 10:28:39.221153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.221164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.221316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.221482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.221494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.221708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.221874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.221903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.222106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.222314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.222325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.222546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.222718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.222748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 
00:33:26.005 [2024-04-24 10:28:39.222940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.223262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.223293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.223538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.223684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.223695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.223982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.224217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.224248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.224572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.224805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.224834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.225024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.225353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.225384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.225710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.225947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.225977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.226303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.226664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.226694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 
00:33:26.005 [2024-04-24 10:28:39.226957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.227196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.227207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.227498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.227728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.227757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.228091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.228331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.228361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.228627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.228895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.228924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.229180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.229486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.229517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.229770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.230097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.230128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.230332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.230513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.230543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 
00:33:26.005 [2024-04-24 10:28:39.230848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.231151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.231182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.231518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.231803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.231832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.232021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.232280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.232316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.232564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.232812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.232841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.233169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.233471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.233502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.005 [2024-04-24 10:28:39.233777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.234021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.005 [2024-04-24 10:28:39.234051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.005 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.234326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.234590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.234619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 
00:33:26.006 [2024-04-24 10:28:39.234923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.235135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.235166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.235357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.235653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.235683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.235920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.236167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.236179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.236396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.236542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.236553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.236838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.237100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.237131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.237414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.237582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.237616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.237887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.238152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.238163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 
00:33:26.006 [2024-04-24 10:28:39.238322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.238604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.238634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.238850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.239089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.239120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.239398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.239665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.239695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.240024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.240282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.240312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.240439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.240647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.240677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.240927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.241117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.241148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.241447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.241709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.241738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 
00:33:26.006 [2024-04-24 10:28:39.241992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.242218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.242229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.242394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.242571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.242605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.242872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.243169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.243200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.243449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.243686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.243716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.243966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.244142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.244174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.244374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.244556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.244585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.244831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.245099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.245131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 
00:33:26.006 [2024-04-24 10:28:39.245372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.245551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.245581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.245817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.246067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.246120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.246363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.246679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.246709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.246959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.247149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.247179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.247482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.247729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.247764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.248087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.248295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.248306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 00:33:26.006 [2024-04-24 10:28:39.248527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.248770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.006 [2024-04-24 10:28:39.248801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.006 qpair failed and we were unable to recover it. 
00:33:26.007 [2024-04-24 10:28:39.248974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.007 [2024-04-24 10:28:39.249283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.007 [2024-04-24 10:28:39.249314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.007 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, from 10:28:39.249640 through 10:28:39.313888 ...]
00:33:26.326 [2024-04-24 10:28:39.314107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.326 [2024-04-24 10:28:39.314385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.326 [2024-04-24 10:28:39.314414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.326 qpair failed and we were unable to recover it.
00:33:26.326 [2024-04-24 10:28:39.314712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.314864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.314875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.315023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.315136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.315147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.315355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.315549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.315579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.315861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.316102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.316133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.316372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.316619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.316632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.316832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.317049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.317091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.317281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.317544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.317555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 
00:33:26.326 [2024-04-24 10:28:39.317728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.317963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.317992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.318266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.318522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.318551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.318730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.318999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.319028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.319284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.319467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.319478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.319694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.319871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.319900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.320160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.320405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.320435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.320687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.320963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.320991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 
00:33:26.326 [2024-04-24 10:28:39.321186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.321370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.321381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.321657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.321843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.321873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.322081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.322268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.322297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.322538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.322808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.322838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.323024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.323264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.323295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.323607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.323822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.323852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.324050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.324389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.324420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 
00:33:26.326 [2024-04-24 10:28:39.324619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.324859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.324888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.325088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.325358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.325388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.325627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.325814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.325844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.326 qpair failed and we were unable to recover it. 00:33:26.326 [2024-04-24 10:28:39.326026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.326 [2024-04-24 10:28:39.326369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.326381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.326577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.326850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.326879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.327122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.327367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.327396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.327670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.327924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.327954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 
00:33:26.327 [2024-04-24 10:28:39.328142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.328472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.328501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.328846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.329106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.329137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.329412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.329595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.329624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.329887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.330189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.330220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.330541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.330778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.330807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.331134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.331389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.331419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.331689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.331998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.332027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 
00:33:26.327 [2024-04-24 10:28:39.332230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.332561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.332590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.332787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.333026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.333055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.333317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.333642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.333672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.333873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.334113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.334145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.334351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.334648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.334678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.334996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.335237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.335269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.335489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.335616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.335646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 
00:33:26.327 [2024-04-24 10:28:39.335948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.336197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.336228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.336488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.336713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.336743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.336981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.337167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.337198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.337461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.337645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.337674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.337922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.338155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.338185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.338373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.338601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.338631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.338823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.339057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.339099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 
00:33:26.327 [2024-04-24 10:28:39.339366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.339623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.339653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.339956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.340224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.340255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.340514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.340757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.340786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.340906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.341153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.341184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.327 qpair failed and we were unable to recover it. 00:33:26.327 [2024-04-24 10:28:39.341362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.327 [2024-04-24 10:28:39.341543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.341572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.341838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.342063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.342081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.342249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.342450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.342479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 
00:33:26.328 [2024-04-24 10:28:39.342809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.342989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.343019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.343268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.343580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.343591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.343730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.343965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.343976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.344254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.344420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.344450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.344732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.345034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.345063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.345257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.345494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.345523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.345759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.345982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.346011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 
00:33:26.328 [2024-04-24 10:28:39.346266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.346580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.346591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.346853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.347131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.347143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.347365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.347554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.347583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.347836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.348019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.348048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.348338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.348515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.348526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.348763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.348971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.349000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.349245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.349495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.349525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 
00:33:26.328 [2024-04-24 10:28:39.349854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.350035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.350065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.350360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.350559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.350590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.350839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.351049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.351088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.351368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.351603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.351632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.351819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.352108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.352139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.352401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.352643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.352673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.352923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.353174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.353205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 
00:33:26.328 [2024-04-24 10:28:39.353476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.353686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.353697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.353905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.354067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.354083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.354249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.354404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.354415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.354623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.354847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.354876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.355064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.355387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.355398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.328 qpair failed and we were unable to recover it. 00:33:26.328 [2024-04-24 10:28:39.355553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.328 [2024-04-24 10:28:39.355872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.355902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.356245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.356423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.356453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 
00:33:26.329 [2024-04-24 10:28:39.356655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.356902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.356932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.357180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.357374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.357404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.357699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.357955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.357985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.358165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.358487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.358517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.358708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.358976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.359006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.359270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.359509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.359539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.359712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.359950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.359979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 
00:33:26.329 [2024-04-24 10:28:39.360176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.360373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.360403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.360577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.360924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.360954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.361204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.361397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.361427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.361598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.361894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.361924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.362160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.362408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.362419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.362624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.362840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.362851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.363077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.363285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.363296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 
00:33:26.329 [2024-04-24 10:28:39.363411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.363642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.363653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.363950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.364117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.364147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.364346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.364608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.364637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.364947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.365198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.365229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.365557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.365728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.365758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.366024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.366243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.366274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 00:33:26.329 [2024-04-24 10:28:39.366527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.366710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.329 [2024-04-24 10:28:39.366740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.329 qpair failed and we were unable to recover it. 
[... the same failure cycle (two "connect() failed, errno = 111" records from posix.c:1032:posix_sock_create, one "sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420" record from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.") repeats continuously from 10:28:39.367034 through 10:28:39.441501 ...]
00:33:26.335 [2024-04-24 10:28:39.441700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.335 [2024-04-24 10:28:39.441909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.335 [2024-04-24 10:28:39.441938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.335 qpair failed and we were unable to recover it.
00:33:26.335 [2024-04-24 10:28:39.442279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.442465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.442495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.442803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.442983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.443013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.443264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.443513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.443543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.443794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.444061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.444100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.444288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.444549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.444585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.444793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.444996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.445007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.445141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.445381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.445410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 
00:33:26.335 [2024-04-24 10:28:39.445718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.445975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.446004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.446290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.446481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.446510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.446841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.447031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.447060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.447317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.447641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.447670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.447916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.448168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.448199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.448435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.448606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.448635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.448880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.449086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.449118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 
00:33:26.335 [2024-04-24 10:28:39.449371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.449560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.449589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.449899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.450059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.450073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.450289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.450457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.450487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.450755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.450992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.451022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.451237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.451538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.451568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.451765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.451964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.451999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 00:33:26.335 [2024-04-24 10:28:39.452212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.452544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.335 [2024-04-24 10:28:39.452573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.335 qpair failed and we were unable to recover it. 
00:33:26.336 [2024-04-24 10:28:39.452761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.452956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.452986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.453208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.453412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.453441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.453774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.453966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.453996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.454183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.454374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.454404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.454668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.454959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.454988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.455277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.455476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.455505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.455775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.455909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.455939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 
00:33:26.336 [2024-04-24 10:28:39.456191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.456432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.456462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.456780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.457000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.457010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.457271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.457461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.457490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.457795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.457981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.457992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.458184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.458382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.458411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.458750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.458995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.459025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.459343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.459530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.459565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 
00:33:26.336 [2024-04-24 10:28:39.459888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.460082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.460114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.460420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.460655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.460684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.460879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.461055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.461065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.461217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.461356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.461386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.461716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.461984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.462014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.462153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.462386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.462415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.462719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.463030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.463060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 
00:33:26.336 [2024-04-24 10:28:39.463319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.463682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.463711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.463961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.464148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.464160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.464428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.464773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.464808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.465153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.465390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.465419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.465668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.465917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.465947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.466251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.466520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.466549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 00:33:26.336 [2024-04-24 10:28:39.466805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.467126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.467158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.336 qpair failed and we were unable to recover it. 
00:33:26.336 [2024-04-24 10:28:39.467433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.467625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.336 [2024-04-24 10:28:39.467655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.467847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.468112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.468144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.468451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.468628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.468657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.468890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.469092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.469123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.469385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.469628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.469658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.469842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.470030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.470064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.470259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.470583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.470612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 
00:33:26.337 [2024-04-24 10:28:39.470879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.471126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.471137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.471355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.471522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.471533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.471678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.471974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.472004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.472240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.472488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.472517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.472869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.473111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.473122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.473344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.473478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.473507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.473754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.473999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.474029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 
00:33:26.337 [2024-04-24 10:28:39.474308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.474499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.474528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.474862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.475015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.475028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.475147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.475354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.475365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.475495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.475662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.475691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.475897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.476130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.476162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.476452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.476697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.476726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.477033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.477358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.477388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 
00:33:26.337 [2024-04-24 10:28:39.477664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.477908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.477919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.478128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.478323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.478335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.478534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.478695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.478725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.478922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.479193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.479224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.479404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.479702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.479731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.479909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.480132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.480163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.480339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.480610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.480640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 
00:33:26.337 [2024-04-24 10:28:39.480883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.481190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.481221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.337 qpair failed and we were unable to recover it. 00:33:26.337 [2024-04-24 10:28:39.481525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.337 [2024-04-24 10:28:39.481849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.481879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.482064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.482323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.482352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.482550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.482894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.482923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.483126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.483368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.483397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.483589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.483847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.483877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.484206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.484455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.484485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 
00:33:26.338 [2024-04-24 10:28:39.484730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.485031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.485061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.485384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.485626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.485655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.485910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.486055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.486066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.486290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.486589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.486619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.486941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.487180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.487192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.487391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.487591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.487621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.487901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.488080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.488111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 
00:33:26.338 [2024-04-24 10:28:39.488418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.488653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.488682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.488985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.489246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.489277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.489460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.489651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.489681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.489804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.489992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.490002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.490247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.490465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.490494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.490791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.491040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.491078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 00:33:26.338 [2024-04-24 10:28:39.491419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.491628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.338 [2024-04-24 10:28:39.491657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.338 qpair failed and we were unable to recover it. 
00:33:26.338 [2024-04-24 10:28:39.491850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.338 [2024-04-24 10:28:39.492126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.338 [2024-04-24 10:28:39.492157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.338 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously with fresh timestamps between 10:28:39.492 and 10:28:39.566 ...]
00:33:26.344 [2024-04-24 10:28:39.565994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.344 [2024-04-24 10:28:39.566135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.344 [2024-04-24 10:28:39.566146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.344 qpair failed and we were unable to recover it.
00:33:26.344 [2024-04-24 10:28:39.566301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.566438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.566450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.566595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.566911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.566922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.567058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.567225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.567237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.567426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.567580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.567591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.567740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.567882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.567893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.568107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.568254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.568264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.568490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.568774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.568785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 
00:33:26.344 [2024-04-24 10:28:39.568987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.569199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.569210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.569414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.569607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.569618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.569766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.569997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.570008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.570236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.570390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.570401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.570665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.570813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.570823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.571020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.571216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.571227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.571432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.571585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.571596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 
00:33:26.344 [2024-04-24 10:28:39.571743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.571899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.571910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.572205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.572380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.572391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.572537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.572667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.572678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.572898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.573055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.573066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.573276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.573405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.573416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.344 qpair failed and we were unable to recover it. 00:33:26.344 [2024-04-24 10:28:39.573615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.573808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.344 [2024-04-24 10:28:39.573819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.574023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.574155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.574167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 
00:33:26.345 [2024-04-24 10:28:39.574313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.574513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.574524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.574754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.574902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.574913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.575022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.575170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.575181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.575326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.575605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.575616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.575890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.576158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.576170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.576391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.576612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.576623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.576787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.576896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.576907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 
00:33:26.345 [2024-04-24 10:28:39.577049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.577280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.577291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.577440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.577635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.577646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.577808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.577956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.577967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.578173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.578299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.578310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.578461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.578596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.578608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.578895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.579041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.579053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.579262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.579413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.579424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 
00:33:26.345 [2024-04-24 10:28:39.579623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.579821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.579832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.579966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.580118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.580131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.580291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.580443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.580454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.580660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.580870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.580881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.581113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.581308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.581320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.581468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.581685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.581696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.581835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.582039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.582066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 
00:33:26.345 [2024-04-24 10:28:39.582288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.582559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.582570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.582789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.583213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.583596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.583887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.583980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.584123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.584321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.584332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.345 qpair failed and we were unable to recover it. 00:33:26.345 [2024-04-24 10:28:39.584557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.345 [2024-04-24 10:28:39.584774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.346 [2024-04-24 10:28:39.584786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.346 qpair failed and we were unable to recover it. 
00:33:26.346 [2024-04-24 10:28:39.584938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.585119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.585132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.585358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.585528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.585540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.585735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.585893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.585915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.586045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.586258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.586270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.586414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.586619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.586630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.586760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.586893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.586905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.587053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.587261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.587273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 
00:33:26.619 [2024-04-24 10:28:39.587403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.587605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.587617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.587882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.588089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.588101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.588366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.588566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.588578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.588757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.588965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.588976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.589094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.589286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.589297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.589561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.589723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.589734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.589908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.590053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.590065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 
00:33:26.619 [2024-04-24 10:28:39.590217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.590413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.590424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.590570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.590715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.590728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.591027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.591236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.591248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.591367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.591569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.591580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.591844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.591980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.591991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.592239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.592401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.592413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.592554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.592706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.592717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 
00:33:26.619 [2024-04-24 10:28:39.592849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.593013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.593025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.593278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.593479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.593489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.593701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.593845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.593856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.619 [2024-04-24 10:28:39.594136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.594374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.619 [2024-04-24 10:28:39.594386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.619 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.594581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.594728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.594739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.594947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.595091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.595103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.595308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.595548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.595560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 
00:33:26.620 [2024-04-24 10:28:39.595781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.595925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.595936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.596082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.596233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.596245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.596438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.596653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.596663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.596894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.597124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.597135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.597266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.597396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.597406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.597494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.597621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.597632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.597783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.598004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.598015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 
00:33:26.620 [2024-04-24 10:28:39.598279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.598363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.598374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.598507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.598702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.598712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.598855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.599059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.599075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.599207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.599288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.599299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.599430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.599689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.599700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.599843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.600210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 
00:33:26.620 [2024-04-24 10:28:39.600501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.600812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.600984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.601201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.601353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.601364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.601512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.601711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.601724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.601992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.602147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.602158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.602293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.602580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.602591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 00:33:26.620 [2024-04-24 10:28:39.602717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.602851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.620 [2024-04-24 10:28:39.602862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.620 qpair failed and we were unable to recover it. 
00:33:26.620 [2024-04-24 10:28:39.603068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.620 [2024-04-24 10:28:39.603218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.620 [2024-04-24 10:28:39.603229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.620 qpair failed and we were unable to recover it.
[The four lines above repeat verbatim except for their timestamps, which advance from 10:28:39.603422 through 10:28:39.664896 (log clock 00:33:26.620 through 00:33:26.626). Each retry against tqpair=0x7f2af4000b90 (addr=10.0.0.2, port=4420) logs two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error", and "qpair failed and we were unable to recover it."]
00:33:26.626 [2024-04-24 10:28:39.665029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.665238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.665250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.665461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.665672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.665683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.665996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.666282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.666293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.666429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.666652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.666665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.666937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.667078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.667090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.667287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.667499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.667510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.667774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.667934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.667945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 
00:33:26.626 [2024-04-24 10:28:39.668236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.668435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.668446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.668645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.668874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.668884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.669113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.669424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.669436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.669648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.669779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.669791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.669988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.670252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.670263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.670470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.670676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.670687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.670846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.671053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.671066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 
00:33:26.626 [2024-04-24 10:28:39.671369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.671591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.671602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.671810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.672023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.672034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.672177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.672445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.672457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.626 qpair failed and we were unable to recover it. 00:33:26.626 [2024-04-24 10:28:39.672666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.626 [2024-04-24 10:28:39.672860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.672871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.673083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.673295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.673306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.673449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.673649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.673660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.673865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.674086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.674098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 
00:33:26.627 [2024-04-24 10:28:39.674237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.674376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.674387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.674620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.674826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.674838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.675047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.675283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.675297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.675468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.675685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.675696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.675850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.676005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.676016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.676236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.676497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.676508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.676719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.676914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.676925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 
00:33:26.627 [2024-04-24 10:28:39.677222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.677533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.677544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.677690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.677933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.677944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.678179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.678379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.678391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.678564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.678846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.678856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.679017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.679178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.679190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.679483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.679771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.679784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.679915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.680063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.680080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 
00:33:26.627 [2024-04-24 10:28:39.680298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.680497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.680508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.680668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.680928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.680939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.681202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.681409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.681421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.681571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.681766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.681776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.681980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.682141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.682153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.682292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.682425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.682436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.682641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.682905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.682916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 
00:33:26.627 [2024-04-24 10:28:39.683141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.683342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.683353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.683496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.683574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.683585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.683729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.683989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.684000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.684209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.684426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.684438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.627 qpair failed and we were unable to recover it. 00:33:26.627 [2024-04-24 10:28:39.684597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.627 [2024-04-24 10:28:39.684932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.684943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.685179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.685445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.685456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.685679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.685819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.685831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 
00:33:26.628 [2024-04-24 10:28:39.686026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.686243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.686255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.686477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.686681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.686692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.686850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.687010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.687020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.687229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.687432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.687442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.687645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.687923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.687935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.688080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.688240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.688250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.688464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.688690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.688701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 
00:33:26.628 [2024-04-24 10:28:39.688843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.688995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.689006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.689151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.689442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.689452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.689675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.689940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.689951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.690169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.690407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.690418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.690570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.690786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.690797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.690940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.691148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.691160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.691356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.691553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.691564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 
00:33:26.628 [2024-04-24 10:28:39.691794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.691992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.692003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.692294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.692423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.692435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.692658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.692861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.692872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.693105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.693302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.693314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.693585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.693785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.693796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.693953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.694162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.694174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 00:33:26.628 [2024-04-24 10:28:39.694458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.694603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.628 [2024-04-24 10:28:39.694614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.628 qpair failed and we were unable to recover it. 
00:33:26.629 [2024-04-24 10:28:39.694830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.695041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.695053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.695349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.695479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.695490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.695753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.695960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.695971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.696130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.696342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.696352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.696509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.696726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.696737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.696940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.697202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.697213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.697373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.697641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.697652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 
00:33:26.629 [2024-04-24 10:28:39.697905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.698063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.698082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.698367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.698575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.698587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.698801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.699033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.699045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.699265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.699412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.699423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.699540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.699803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.699814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.700023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.700233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.700244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.700380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.700592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.700604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 
00:33:26.629 [2024-04-24 10:28:39.700803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.700964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.700975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.701186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.701422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.701433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.701664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.701924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.701935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.702129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.702280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.702291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.702497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.702760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.702771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.703008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.703150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.703171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.703385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.703523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.703535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 
00:33:26.629 [2024-04-24 10:28:39.703680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.703891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.703902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.704048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.704270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.704282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.704566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.704780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.704792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.704940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.705079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.705091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.705312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.705573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.705584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.705788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.706090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.706102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 00:33:26.629 [2024-04-24 10:28:39.706248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.706510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.629 [2024-04-24 10:28:39.706521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.629 qpair failed and we were unable to recover it. 
00:33:26.629 [2024-04-24 10:28:39.706669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.629 [2024-04-24 10:28:39.706886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.629 [2024-04-24 10:28:39.706897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.630 qpair failed and we were unable to recover it.
00:33:26.630 [... the same four-record sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 10:28:39.707134 through 10:28:39.772353; only the timestamps differ ...]
00:33:26.635 [2024-04-24 10:28:39.772508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.772794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.772804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.772885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.773028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.773038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.773243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.773456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.773468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.773713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.773855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.773867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.774003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.774155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.774166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.774375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.774638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.774649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.774845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.775058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.775074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 
00:33:26.635 [2024-04-24 10:28:39.775181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.775377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.775388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.775601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.775814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.775825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.776049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.776264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.776276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.776447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.776611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.776622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.776820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.777100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.777112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.777320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.777466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.777478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.777625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.777836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.777848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 
00:33:26.635 [2024-04-24 10:28:39.778063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.778222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.635 [2024-04-24 10:28:39.778234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.635 qpair failed and we were unable to recover it. 00:33:26.635 [2024-04-24 10:28:39.778514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.778738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.778749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.778986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.779145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.779156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.779299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.779460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.779471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.779705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.779900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.779911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.780079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.780232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.780243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.780442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.780688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.780699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 
00:33:26.636 [2024-04-24 10:28:39.780861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.781001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.781012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.781215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.781480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.781491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.781775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.781930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.781940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.782082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.782285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.782297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.782564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.782765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.782776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.782967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.783163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.783174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.783407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.783615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.783627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 
00:33:26.636 [2024-04-24 10:28:39.783863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.784088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.784100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.784321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.784454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.784464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.784728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.784873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.784884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.785038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.785299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.785310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.785472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.785775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.785786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.786002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.786212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.786224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.786395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.786659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.786671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 
00:33:26.636 [2024-04-24 10:28:39.786800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.787065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.787081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.787318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.787514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.787526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.787685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.787823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.787834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.787974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.788178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.788190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.788329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.788539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.788550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.788759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.788925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.788935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.789225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.789480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.789491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 
00:33:26.636 [2024-04-24 10:28:39.789798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.790009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.790020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.790235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.790387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.636 [2024-04-24 10:28:39.790398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.636 qpair failed and we were unable to recover it. 00:33:26.636 [2024-04-24 10:28:39.790556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.790816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.790828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.790978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.791271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.791283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.791479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.791687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.791698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.791916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.792112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.792123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.792281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.792419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.792431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 
00:33:26.637 [2024-04-24 10:28:39.792626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.792900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.792911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.793067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.793237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.793248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.793344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.793547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.793558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.793703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.793914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.793925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.794147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.794302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.794314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.794577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.794707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.794718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.794923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.795081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.795092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 
00:33:26.637 [2024-04-24 10:28:39.795274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.795537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.795548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.795842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.795999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.796010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.796144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.796406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.796417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.796629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.796781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.796792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.797009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.797141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.797153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.797359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.797621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.797631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.797836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.798039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.798050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 
00:33:26.637 [2024-04-24 10:28:39.798271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.798562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.798574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.798839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.799125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.799136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.799355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.799568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.799579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.799839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.800033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.800044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.800358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.800619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.800630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.637 qpair failed and we were unable to recover it. 00:33:26.637 [2024-04-24 10:28:39.800788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.637 [2024-04-24 10:28:39.800945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.800956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.801163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.801309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.801320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 
00:33:26.638 [2024-04-24 10:28:39.801542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.801773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.801784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.802010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.802225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.802237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.802372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.802590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.802600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.802804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.803010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.803023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.803171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.803318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.803329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.803614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.803898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.803909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.804117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.804268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.804279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 
00:33:26.638 [2024-04-24 10:28:39.804480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.804581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.804592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.804743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.804830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.804842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.805054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.805274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.805286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.805584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.805811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.805822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.806022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.806315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.806329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.806531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.806824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.806835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.807117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.807274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.807286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 
00:33:26.638 [2024-04-24 10:28:39.807495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.807654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.807665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.807810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.808076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.808088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.808227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.808373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.808384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.808603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.808746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.808757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.808964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.809258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.809269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.809490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.809642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.809654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.809812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.810009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.810021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 
00:33:26.638 [2024-04-24 10:28:39.810115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.810281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.810292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.810575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.810790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.810801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.811017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.811236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.811250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.811491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.811734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.811745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.811946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.812179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.812190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.812425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.812638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.812649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.638 qpair failed and we were unable to recover it. 00:33:26.638 [2024-04-24 10:28:39.812863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.638 [2024-04-24 10:28:39.813089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.639 [2024-04-24 10:28:39.813101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.639 qpair failed and we were unable to recover it. 
00:33:26.639 [2024-04-24 10:28:39.813305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.639 [2024-04-24 10:28:39.813463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.639 [2024-04-24 10:28:39.813474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.639 qpair failed and we were unable to recover it.
[... this same failure sequence — two posix_sock_create "connect() failed, errno = 111" lines, then "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats back-to-back from 2024-04-24 10:28:39.813686 through 10:28:39.875108, always for the same tqpair, address, and port; the duplicate repetitions are collapsed here ...]
00:33:26.644 [2024-04-24 10:28:39.875326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.875486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.875496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.875705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.875852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.875864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.876126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.876336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.876351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.876499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.876661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.876672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.876820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.877094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.877106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.877377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.877532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.877543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.877685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.877899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.877910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 
00:33:26.644 [2024-04-24 10:28:39.878195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.878466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.878478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.878626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.878755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.878766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.878961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.879177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.879189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.879349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.879564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.879575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.879776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.880093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.880105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.880252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.880468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.880480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 00:33:26.644 [2024-04-24 10:28:39.880619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.880747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.880758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.644 qpair failed and we were unable to recover it. 
00:33:26.644 [2024-04-24 10:28:39.880896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.881099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.644 [2024-04-24 10:28:39.881111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.645 qpair failed and we were unable to recover it. 00:33:26.645 [2024-04-24 10:28:39.881252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.881471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.881482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.645 qpair failed and we were unable to recover it. 00:33:26.645 [2024-04-24 10:28:39.881681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.881822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.881833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.645 qpair failed and we were unable to recover it. 00:33:26.645 [2024-04-24 10:28:39.882056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.882212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.882225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.645 qpair failed and we were unable to recover it. 00:33:26.645 [2024-04-24 10:28:39.882455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.882665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.882676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.645 qpair failed and we were unable to recover it. 00:33:26.645 [2024-04-24 10:28:39.882829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.883107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.645 [2024-04-24 10:28:39.883119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.645 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.883331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.883526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.883537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 
00:33:26.917 [2024-04-24 10:28:39.883677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.883895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.883906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.884100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.884311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.884322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.884474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.884634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.884645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.884842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.885000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.885011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.885153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.885288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.885299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.885600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.885759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.885770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.885926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.886060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.886074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 
00:33:26.917 [2024-04-24 10:28:39.886240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.886385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.886396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.886657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.886856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.886867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.887008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.887158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.887169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.887376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.887662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.887673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.887829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.887968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.887979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.888198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.888394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.888405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.888694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.888829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.888841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 
00:33:26.917 [2024-04-24 10:28:39.889054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.889321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.889333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.889528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.889796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.889808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.890087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.890232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.890243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.890371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.890572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.890583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.890719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.890863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.890874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.890963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.891156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.891167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.917 qpair failed and we were unable to recover it. 00:33:26.917 [2024-04-24 10:28:39.891314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.917 [2024-04-24 10:28:39.891457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.891468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 
00:33:26.918 [2024-04-24 10:28:39.891676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.891818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.891830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.891992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.892202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.892213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.892407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.892606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.892617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.892850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.893093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.893105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.893251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.893416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.893427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.893626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.893850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.893861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.894096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.894379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.894391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 
00:33:26.918 [2024-04-24 10:28:39.894635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.894759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.894770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.894982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.895191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.895203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.895472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.895606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.895617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.895860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.896007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.896018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.896221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.896518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.896529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.896771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.897011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.897022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.897300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.897577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.897588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 
00:33:26.918 [2024-04-24 10:28:39.897740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.898001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.898013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.898323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.898558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.898570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.898800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.898933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.898944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.899149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.899415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.899426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.899686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.899984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.899995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.900285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.900455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.900465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.900732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.900932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.900943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 
00:33:26.918 [2024-04-24 10:28:39.901168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.901310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.901321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.901405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.901676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.901687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.901894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.902031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.902042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.902306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.902447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.902458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.902720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.902916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.902927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.903136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.903287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.903298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 00:33:26.918 [2024-04-24 10:28:39.903447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.903653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.918 [2024-04-24 10:28:39.903664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.918 qpair failed and we were unable to recover it. 
00:33:26.919 [2024-04-24 10:28:39.903871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.904085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.904098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.904247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.904477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.904487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.904697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.904838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.904849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.905058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.905285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.905297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.905412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.905626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.905638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.905802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.905950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.905962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.906118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.906329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.906340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 
00:33:26.919 [2024-04-24 10:28:39.906593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.906876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.906887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.907027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.907183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.907195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.907338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.907493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.907504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.907714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.907979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.907990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.908133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.908350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.908362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.908507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.908657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.908668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.908944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.909088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.909100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 
00:33:26.919 [2024-04-24 10:28:39.909263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.909476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.909487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.909628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.909828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.909840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.909938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.910078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.910089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.910284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.910504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.910516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.910729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.910935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.910946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.911203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.911367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.911378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.911574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.911820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.911831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 
00:33:26.919 [2024-04-24 10:28:39.912041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.912190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.912201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.912410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.912547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.912558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.912703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.912916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.912928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.913132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.913283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.913295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.913458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.913670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.913680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.913856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.914073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.914085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 00:33:26.919 [2024-04-24 10:28:39.914301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.914499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.914511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.919 qpair failed and we were unable to recover it. 
00:33:26.919 [2024-04-24 10:28:39.914775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.915035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.919 [2024-04-24 10:28:39.915046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.915245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.915448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.915460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.915551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.915755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.915765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.916053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.916213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.916224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.916362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.916575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.916587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.916762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.916912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.916923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.917120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.917413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.917424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 
00:33:26.920 [2024-04-24 10:28:39.917567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.917645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.917655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.917808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.917949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.917960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.918114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.918210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.918221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.918434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.918634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.918645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.918843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.918996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.919007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.919229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.919514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.919525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.919745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.919953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.919964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 
00:33:26.920 [2024-04-24 10:28:39.920217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.920409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.920420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.920614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.920826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.920837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.921048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.921267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.921279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.921542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.921672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.921683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.921881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.922018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.922030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.922250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.922453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.922465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.922544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.922749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.922760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 
00:33:26.920 [2024-04-24 10:28:39.923036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.923168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.923180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.923402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.923610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.923621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.923858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.924064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.924080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.924219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.924419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.924430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.924717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.924927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.924939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.925153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.925374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.925386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.925540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.925753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.925764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 
00:33:26.920 [2024-04-24 10:28:39.925908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.926224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.926236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.920 qpair failed and we were unable to recover it. 00:33:26.920 [2024-04-24 10:28:39.926446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.920 [2024-04-24 10:28:39.926660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.926671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.926780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.926926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.926937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.927140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.927282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.927294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.927504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.927711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.927721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.927944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.928171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.928182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.928386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.928481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.928492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 
00:33:26.921 [2024-04-24 10:28:39.928807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.929020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.929032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.929253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.929420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.929431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.929646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.929855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.929866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.930157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.930421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.930432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.930644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.930906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.930917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.931184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.931342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.931353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.931431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.931640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.931651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 
00:33:26.921 [2024-04-24 10:28:39.931779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.931995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.932006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.932294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.932491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.932502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.932664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.932817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.932828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.932955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.933179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.933192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.933430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.933679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.933690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.933889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.934037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.934048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.934255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.934415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.934426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 
00:33:26.921 [2024-04-24 10:28:39.934645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.934856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.934868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.935008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.935217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.935229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.935327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.935525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.935536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.921 qpair failed and we were unable to recover it. 00:33:26.921 [2024-04-24 10:28:39.935680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.921 [2024-04-24 10:28:39.935873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.935885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.936151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.936355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.936365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.936575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.936717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.936728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.936954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.937160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.937175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 
00:33:26.922 [2024-04-24 10:28:39.937313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.937510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.937521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.937734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.937946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.937956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.938111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.938322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.938333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.938563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.938775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.938785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.939005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.939206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.939217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.939357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.939558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.939570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.939849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.940062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.940076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 
00:33:26.922 [2024-04-24 10:28:39.940174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.940405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.940416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.940706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.940848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.940860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.941002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.941131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.941143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.941357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.941619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.941630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.941834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.942121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.942133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.942343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.942577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.942588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.942853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.943142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.943154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 
00:33:26.922 [2024-04-24 10:28:39.943302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.943544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.943556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.943786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.943982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.943993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.944205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.944422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.944433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.944587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.944800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.944811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.945119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.945334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.945345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.945551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.945760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.945771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.945970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.946175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.946187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 
00:33:26.922 [2024-04-24 10:28:39.946416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.946636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.946647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.946804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.947006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.947018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.947214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.947454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.947466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.947642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.947791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.922 [2024-04-24 10:28:39.947802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.922 qpair failed and we were unable to recover it. 00:33:26.922 [2024-04-24 10:28:39.947942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.948089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.948100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.948312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.948529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.948541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.948766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.949058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.949073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 
00:33:26.923 [2024-04-24 10:28:39.949276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.949477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.949489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.949778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.950045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.950056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.950223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.950423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.950435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.950587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.950721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.950732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.951021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.951227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.951238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.951383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.951586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.951597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.951803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.952033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.952045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 
00:33:26.923 [2024-04-24 10:28:39.952373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.952581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.952592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.952793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.953025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.953036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.953297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.953509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.953520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.953734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.953995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.954007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.954238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.954387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.954398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.954661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.954820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.954832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.955032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.955308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.955320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 
00:33:26.923 [2024-04-24 10:28:39.955592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.955866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.955877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.956079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.956392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.956404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.956678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.956888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.956899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.957099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.957375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.957387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.957606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.957759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.957771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.957978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.958198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.958210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.958477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.958693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.958704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 
00:33:26.923 [2024-04-24 10:28:39.958908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.959061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.959083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.959378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.959612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.959623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.959838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.960150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.960162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.960368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.960512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.960543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.960869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.961114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.923 [2024-04-24 10:28:39.961144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.923 qpair failed and we were unable to recover it. 00:33:26.923 [2024-04-24 10:28:39.961346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.961673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.961702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.961983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.962256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.962287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 
00:33:26.924 [2024-04-24 10:28:39.962489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.962838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.962867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.963182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.963531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.963561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.963877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.964140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.964173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.964427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.964726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.964756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.965093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.965421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.965450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.965782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.966019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.966030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.966331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.966485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.966497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 
00:33:26.924 [2024-04-24 10:28:39.966767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.967004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.967015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.967211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.967477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.967506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.967828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.968083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.968115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.968442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.968676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.968706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.968965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.969196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.969208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.969353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.969657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.969686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 00:33:26.924 [2024-04-24 10:28:39.970010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.970349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.924 [2024-04-24 10:28:39.970380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.924 qpair failed and we were unable to recover it. 
00:33:26.924 [2024-04-24 10:28:39.970561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.970796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.970825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.971083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.971373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.971404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.971731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.972027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.972058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.972403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.972648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.972678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.972937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.973135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.973166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.973475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.973792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.973822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.974196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.974395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.974423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.974689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.974890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.974920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.975248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.975583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.975613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.975943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.976199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.976230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.976429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1200 is same with the state(5) to be set
00:33:26.924 [2024-04-24 10:28:39.976882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.977188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.924 [2024-04-24 10:28:39.977225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.924 qpair failed and we were unable to recover it.
00:33:26.924 [2024-04-24 10:28:39.977524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.977767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.977798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.978131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.978365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.978380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.978553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.978823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.978861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.979170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.979393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.979423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.979641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.979897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.979927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.980165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.980386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.980401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.980562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.980864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.980895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.981152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.981478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.981509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.981838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.982177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.982209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.982537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.982780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.982810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.983081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.983390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.983421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.983726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.983979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.984009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.984292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.984563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.984594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.984925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.985191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.985222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.985425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.985724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.985753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.986001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.986327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.986359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.986708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.986981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.987010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.987284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.987578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.987592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.987903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.988120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.988136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.988354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.988650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.988680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.988881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.989166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.989197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.989453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.989774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.989804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.990082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.990410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.990440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.990747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.991106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.991138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.991397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.991698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.991727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.991985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.992205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.992236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.992562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.992898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.992928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.993258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.993585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.993616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.925 [2024-04-24 10:28:39.993954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.994257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.925 [2024-04-24 10:28:39.994289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.925 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.994478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.994759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.994790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.995044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.995404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.995436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.995804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.996106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.996137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.996458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.996659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.996688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.996942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.997263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.997294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.997634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.997972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.998003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.998262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.998519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.998550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.998795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.999113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.999144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.999398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.999597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:39.999626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:39.999953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.000291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.000322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.000648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.000912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.000948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.001278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.001476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.001491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.001736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.001888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.001903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.002113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.002361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.002377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.002551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.002810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.002825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.003041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.003272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.003288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.003455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.003692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.003708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.004006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.004230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.004245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.004446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.004697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.004718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.004894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.005117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.005134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.005377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.005665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.005681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.006000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.006256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.006272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.006515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.006764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.006778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.926 qpair failed and we were unable to recover it.
00:33:26.926 [2024-04-24 10:28:40.006993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.926 [2024-04-24 10:28:40.007272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.007289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.007586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.007883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.007898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.008142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.008413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.008428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.008655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.008889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.008904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.009135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.009438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.009455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.009675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.009951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.009966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.010259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.010486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.010500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.010799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.011020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.011035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.011200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.011448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.011463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.011746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.012029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.012060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.012424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.012709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.012740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.012951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.013225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.013241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.013404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.013617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.013632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.013798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.014054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.014098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.014362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.014548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.014579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.014939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.015195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.015227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.015536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.015709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.015738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.016008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.016340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.016371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.016628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.016961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.016992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.017323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.017579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.017609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.017862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.018191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.018223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.018549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.018816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.018847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.019129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.019376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.019392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.019695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.020019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.020050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.020310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.020556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.020586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.020785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.021051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.021091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.021341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.021642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.021672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.021948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.022222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.022254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.022524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.022826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.022861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.927 qpair failed and we were unable to recover it.
00:33:26.927 [2024-04-24 10:28:40.023171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.023423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.927 [2024-04-24 10:28:40.023453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.023788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.024117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.024148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.024407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.024701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.024732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.024976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.025274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.025290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.025599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.025824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.025839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.026063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.026213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.026229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.026531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.026803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.026818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.027029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.027199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.027214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.027461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.027675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.027690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.027933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.028151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.028169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.028383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.028610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.028626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.028780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.028936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.028950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.029198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.029441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.029456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.029608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.029901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.029916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.030141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.030370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.030385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.030663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.030950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.030964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.031204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.031430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.031444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.031677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.031892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.031906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.032179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.032495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.032509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.032812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.033056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.033076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.033241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.033400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.033414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.033531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.033816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.033830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.034101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.034331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.034345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.034571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.034797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.034812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.035134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.035305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.035319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.035615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.035863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.035877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.036181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.036392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.036407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.036685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.036911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.036926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.037172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.037444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.037458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.928 [2024-04-24 10:28:40.037687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.037897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.928 [2024-04-24 10:28:40.037912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.928 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.038142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.038416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.038431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.038703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.039000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.039015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.039320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.039561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.039576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.039892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.040133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.040148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.040428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.040721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.040736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.040946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.041190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.041205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.041367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.041569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.041584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.041877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.042103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.042118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.042333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.042541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.042555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.042766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.042929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.042944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.043266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.043558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.043573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.043905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.044111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.044126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.044421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.044711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.044726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.044941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.045229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.045244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.045470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.045753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.045768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.045988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.046273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.046289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.046581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.046877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.046892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [2024-04-24 10:28:40.047128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.047441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.047456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [... 4 connect()/qpair-failure cycles (10:28:40.047721 through 10:28:40.049430) elided ...]
00:33:26.929 [2024-04-24 10:28:40.049714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 500513 Killed "${NVMF_APP[@]}" "$@"
00:33:26.929 [2024-04-24 10:28:40.049918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.929 [2024-04-24 10:28:40.049933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.929 qpair failed and we were unable to recover it.
00:33:26.929 [... 1 more cycle (10:28:40.050230 through 10:28:40.050478) elided ...]
00:33:26.929 10:28:40 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:33:26.929 10:28:40 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:26.929 [... 1 more cycle (10:28:40.050747 through 10:28:40.051034) elided ...]
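Line 44 of target_disconnect.sh reports that the previous nvmf_tgt instance (pid 500513) was killed out from under the host; disconnect_init 10.0.0.2 then calls nvmfappstart -m 0xF0 to bring up a replacement target. A condensed sketch of the observable sequence, reconstructed from the trace lines here and below (the real helpers live in test/nvmf/host/target_disconnect.sh and nvmf/common.sh; variable names are simplified):

  # 1. The old target is killed, so the connected host starts the flood of
  #    ECONNREFUSED reconnect attempts seen above.
  kill -9 "$old_nvmfpid"
  # 2. A new target is started inside the test's network namespace, pinned
  #    to cores 4-7 (-m 0xF0), with trace groups enabled (-e 0xFFFF).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # 3. The script blocks until the new target's RPC socket is listening.
  waitforlisten "$nvmfpid"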
00:33:26.929 10:28:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:33:26.929 10:28:40 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:26.929 10:28:40 -- common/autotest_common.sh@10 -- # set +x
00:33:26.930 [... 7 connect()/qpair-failure cycles (10:28:40.051191 through 10:28:40.054846) interleaved with the trace above, elided ...]
00:33:26.930 [... 6 connect()/qpair-failure cycles (10:28:40.055142 through 10:28:40.057922) elided ...]
00:33:26.930 10:28:40 -- nvmf/common.sh@469 -- # nvmfpid=501469
00:33:26.930 10:28:40 -- nvmf/common.sh@470 -- # waitforlisten 501469
00:33:26.930 10:28:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:26.930 [... 1 more cycle (10:28:40.058238 through 10:28:40.058546) elided ...]
00:33:26.930 10:28:40 -- common/autotest_common.sh@819 -- # '[' -z 501469 ']'
00:33:26.930 10:28:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:26.930 10:28:40 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:26.930 10:28:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:26.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:26.930 10:28:40 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:26.930 10:28:40 -- common/autotest_common.sh@10 -- # set +x
00:33:26.930 [... 6 connect()/qpair-failure cycles (10:28:40.058841 through 10:28:40.061812) interleaved with the trace above, elided ...]
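The waitforlisten helper polls until the freshly started target (pid 501469) answers on its RPC UNIX socket; the trace shows its inputs, rpc_addr=/var/tmp/spdk.sock and max_retries=100. A rough sketch of such a loop, assuming this shape (the real implementation is in common/autotest_common.sh and may differ):

  # Poll for the target's RPC socket; give up if the process dies or the
  # retry budget runs out. (Sketch only, not the real helper.)
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early'; exit 1; }
      [[ -S $rpc_addr ]] && break     # UNIX socket exists -> target is up
      sleep 0.5
  done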
00:33:26.930 [... the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle repeats ~77 more times between 10:28:40.062037 and 10:28:40.098880, all against tqpair=0x7d3710 with addr=10.0.0.2, port=4420, elided ...]
00:33:26.933 [... 6 more connect()/qpair-failure cycles (10:28:40.099113 through 10:28:40.102044) elided ...]
00:33:26.933 [2024-04-24 10:28:40.102251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.933 [2024-04-24 10:28:40.102507] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:33:26.933 [2024-04-24 10:28:40.102557] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:26.933 [2024-04-24 10:28:40.102579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.933 [2024-04-24 10:28:40.102608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:26.933 qpair failed and we were unable to recover it.
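The replacement target starts up here (SPDK v24.01.1-pre, git sha1 36faa8c312b, on DPDK 23.11.0). The core mask 0xF0 passed via -m / -c selects CPU cores 4 through 7: 0xF0 = 0b11110000, so exactly bits 4..7 are set. A quick way to expand a DPDK-style hex core mask into the core list:

  # Expand a core mask into the cores it selects; for mask=0xF0 this
  # prints "core 4" through "core 7".
  mask=0xF0
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "core $core"
  done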
00:33:26.933 [... while the new target initializes, the host keeps retrying: ~28 more connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycles between 10:28:40.102945 and 10:28:40.116751, all against tqpair=0x7d3710 with addr=10.0.0.2, port=4420, elided ...]
00:33:26.934 [2024-04-24 10:28:40.117004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.117194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.117225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.934 qpair failed and we were unable to recover it. 00:33:26.934 [2024-04-24 10:28:40.117422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.117569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.117597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.934 qpair failed and we were unable to recover it. 00:33:26.934 [2024-04-24 10:28:40.117849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.118138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.118170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.934 qpair failed and we were unable to recover it. 00:33:26.934 [2024-04-24 10:28:40.118369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.118693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.118722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.934 qpair failed and we were unable to recover it. 00:33:26.934 [2024-04-24 10:28:40.118973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.119108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.934 [2024-04-24 10:28:40.119139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.934 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.119474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.119661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.119675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.119921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.120164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.120195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 
00:33:26.935 [2024-04-24 10:28:40.120504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.120713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.120726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.120900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.121082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.121112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.121369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.121644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.121673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.121925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.122224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.122254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.122581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.122924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.122953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.123102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.123334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.123368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.123632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.123875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.123906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 
00:33:26.935 [2024-04-24 10:28:40.124163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.124348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.124378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.124644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.124802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.124816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.125027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.125248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.125278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.125471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.125701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.125730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.125928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.126121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.126152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.126401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.126556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.126585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.126894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.127103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.127135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 
00:33:26.935 [2024-04-24 10:28:40.127396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.127557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.127570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.127873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.128120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.128151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.128498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.128696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.128724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.128979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.129214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.129243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.129483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.129671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.129699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.129955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.130133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.130163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 00:33:26.935 [2024-04-24 10:28:40.130352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.130649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.935 [2024-04-24 10:28:40.130678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:26.935 qpair failed and we were unable to recover it. 
00:33:26.935 EAL: No free 2048 kB hugepages reported on node 1
00:33:26.935 [... the tqpair=0x7d3710 connect()/qpair-failure sequence continues from 2024-04-24 10:28:40.131015 through 10:28:40.137338 ...]
00:33:26.936 [2024-04-24 10:28:40.137572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.936 [2024-04-24 10:28:40.137803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:26.936 [2024-04-24 10:28:40.137816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:26.936 qpair failed and we were unable to recover it.
00:33:26.939 [... the identical failure sequence repeats for tqpair=0x7f2af4000b90 through 2024-04-24 10:28:40.171220 ...]
00:33:26.939 [2024-04-24 10:28:40.171437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.171699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.171708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.171866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.172141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.172151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.172370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.172597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.172608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.172799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.173102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.173112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.173316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.173478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.173487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.173703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.173839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.173849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.174065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.174332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.174342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 
00:33:26.939 [2024-04-24 10:28:40.174502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.174640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.174652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.174786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.174971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.174981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.175127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.175235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.175246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.175508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.175808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.175818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.175966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.176172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.176182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.176418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.176649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.176659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 00:33:26.939 [2024-04-24 10:28:40.176868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.177100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.939 [2024-04-24 10:28:40.177111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:26.939 qpair failed and we were unable to recover it. 
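For reference, errno 111 on Linux is ECONNREFUSED: connect() is rejected because nothing is listening on the target endpoint (here 10.0.0.2, port 4420, the standard NVMe/TCP port). Below is a minimal standalone C sketch that reproduces the same failure mode, assuming the address is reachable but has no listener on the port; it is only an illustration, not SPDK's actual posix_sock_create():

    /* Minimal illustration of the failure flooding this log: connect() to a
     * reachable host with no listener on the port fails with errno 111
     * (ECONNREFUSED). Address and port are taken from the log entries above;
     * this is not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            /* With no NVMe/TCP target listening, this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

(If the host were unreachable rather than merely not listening, the errno would instead be e.g. EHOSTUNREACH or ETIMEDOUT, so the steady stream of 111s indicates the target machine is up but the NVMe/TCP listener is not accepting on 4420.)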
00:33:26.939 [2024-04-24 10:28:40.177672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
[interleaved with the NOTICE above, the same failure group for tqpair=0x7f2af4000b90 continues from 10:28:40.177328 through 10:28:40.214487; the console-time prefix advances from 00:33:26.939 to 00:33:27.223 over this run]
[three more groups for tqpair=0x7f2af4000b90 run from 10:28:40.214786 through 10:28:40.215723, then the failing tqpair value changes:]
00:33:27.223 [2024-04-24 10:28:40.215980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.216247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.216266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420
00:33:27.223 qpair failed and we were unable to recover it.
00:33:27.223 [2024-04-24 10:28:40.216528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.216779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.216797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:27.223 qpair failed and we were unable to recover it.
00:33:27.223 [2024-04-24 10:28:40.216996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.217234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.217252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.223 qpair failed and we were unable to recover it.
00:33:27.223 [2024-04-24 10:28:40.217818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.217964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.217977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.223 qpair failed and we were unable to recover it.
00:33:27.223 [2024-04-24 10:28:40.218196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.218418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.218434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.223 qpair failed and we were unable to recover it.
00:33:27.223 [2024-04-24 10:28:40.218708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.218946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.223 [2024-04-24 10:28:40.218960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.223 qpair failed and we were unable to recover it.
00:33:27.223 [2024-04-24 10:28:40.219182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.219450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.219463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.219687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.219824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.219838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.220138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.220378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.220392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.220605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.220818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.220832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.221112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.221322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.221335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.221548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.221804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.221818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.222027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.222241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.222255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.222498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.222704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.222717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.222966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.223169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.223183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.223348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.223500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.223513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.223737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.224011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.224025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.224243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.224447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.224459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.224693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.224914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.224927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.225147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.225353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.225367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.225513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.225684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.225697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.226021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.226264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.226278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.226492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.226716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.226729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.226914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.227019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.227032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.227134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.227304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.227317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.227454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.227664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.227677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.227881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.228030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.228043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.228198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.228502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.228516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.228679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.228834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.228847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.229080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.229235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.229249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.229405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.229569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.229582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.229800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.230035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.230048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.230261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.230583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.230598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.230707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.230869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.230882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.224 [2024-04-24 10:28:40.231168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.231377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.224 [2024-04-24 10:28:40.231390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.224 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.231600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.231873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.231886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.232157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.232316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.232329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.232532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.232758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.232773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.232921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.233079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.233092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.233266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.233476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.233489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.233630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.233779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.233793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.234020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.234251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.234265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.234486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.234690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.234702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.234906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.235049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.235063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.235292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.235499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.235513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.225 qpair failed and we were unable to recover it.
00:33:27.225 [2024-04-24 10:28:40.235659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.225 [2024-04-24 10:28:40.235817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.235830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.235981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.236249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.236263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.236426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.236718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.236732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.236999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.237152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.237166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.237301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.237520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.237536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.237766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.238034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.238048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.238268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.238471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.238484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.238691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.238842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.238854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.239076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.239290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.239304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.239462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.239735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.239748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.239897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.240051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.240063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.240233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.240455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.240469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.240603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.240884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.240901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.241132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.241348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.241362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.241591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.241790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.241806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.242033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.242234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.242249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.242405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.242606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.242620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.242833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.243035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.243048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.243324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.243481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.243499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.243722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.243872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.243885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.244102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.244304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.244317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.244560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.244721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.244734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.244938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.245204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.245218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.245430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.245631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.245648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.245870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.246018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.246034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.246257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.246462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.226 [2024-04-24 10:28:40.246476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.226 qpair failed and we were unable to recover it.
00:33:27.226 [2024-04-24 10:28:40.246633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.246862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.246881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.247091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.247326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.247339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.247638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.247884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.247898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.248175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.248399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.248412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.248562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.248777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.248790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.249135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.249411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.249427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.249644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.249863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.249877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.250116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.250414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.250430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.250716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.250941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.250960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.251002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:33:27.227 [2024-04-24 10:28:40.251119] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:27.227 [2024-04-24 10:28:40.251127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:27.227 [2024-04-24 10:28:40.251133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:27.227 [2024-04-24 10:28:40.251278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.251241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:33:27.227 [2024-04-24 10:28:40.251347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:33:27.227 [2024-04-24 10:28:40.251430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:33:27.227 [2024-04-24 10:28:40.251499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.251513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.251431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:33:27.227 [2024-04-24 10:28:40.251728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.252033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.252047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.252281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.252554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.252568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.252791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.253110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.253125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.253371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.253594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.253607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.253854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.254029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.254042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.254316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.254545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.254559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.254774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.255015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.255033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.255276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.255490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.255505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.255784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.255951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.255964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.256165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.256432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.256445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.256751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.257038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.257052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.257306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.257582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.257602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.257962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.258249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.258264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.258488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.258763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.258779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.258953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.259164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.259177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.259404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.259705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.259719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.227 qpair failed and we were unable to recover it.
00:33:27.227 [2024-04-24 10:28:40.259954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.260242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.227 [2024-04-24 10:28:40.260262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.260481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.260766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.260780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.261016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.261238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.261252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.261473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.261708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.261723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.261975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.262206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.262222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.262471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.262768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.262782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.262951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.263276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.263292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.263466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.263692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.263706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.263926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.264148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.264165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.264441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.264659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.264674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.264977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.265200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.265216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.265492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.265737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.265752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.266047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.266365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.266382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.266609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.266859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.266874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.267116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.267447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.267463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.267760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.268016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.268031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.268208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.268423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.268442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.268739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.269035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.269050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.269306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.269524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.269541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.269817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.270018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.270033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.270338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.270634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.270651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.270938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.271187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.271201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.271489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.271760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.271775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.272082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.272375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.272390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.272557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.272772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.272787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.273078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.273282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.273295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.273523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.273750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.273763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.274063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.274350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.274365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.228 [2024-04-24 10:28:40.274664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.274898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.228 [2024-04-24 10:28:40.274915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.228 qpair failed and we were unable to recover it.
00:33:27.229 [2024-04-24 10:28:40.275239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.229 [2024-04-24 10:28:40.275409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.229 [2024-04-24 10:28:40.275423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.229 qpair failed and we were unable to recover it.
00:33:27.229 [2024-04-24 10:28:40.275639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.229 [2024-04-24 10:28:40.275867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.229 [2024-04-24 10:28:40.275881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.229 qpair failed and we were unable to recover it.
00:33:27.229 [2024-04-24 10:28:40.276106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.276378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.276392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.276612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.276924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.276938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.277144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.277419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.277434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.277734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.278004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.278018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.278324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.278668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.278681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.278911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.279141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.279155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.279470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.279696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.279709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 
00:33:27.229 [2024-04-24 10:28:40.280028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.280341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.280359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.280550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.280756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.280770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.281067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.281390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.281408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.281560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.281734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.281748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.282035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.282268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.282283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.282505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.282781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.282795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.283117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.283424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.283438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 
00:33:27.229 [2024-04-24 10:28:40.283745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.284002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.284016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.284291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.284572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.284586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.284888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.285204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.285218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.285446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.285719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.285736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.285921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.286242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.286258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.286477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.286746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.286760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.287002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.287223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.287238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 
00:33:27.229 [2024-04-24 10:28:40.287445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.287770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.287784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.288105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.288345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.288358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.288597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.288907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.288920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.289210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.289529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.289542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.289719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.289989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.290002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.229 [2024-04-24 10:28:40.290258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.290481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.229 [2024-04-24 10:28:40.290495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.229 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.290729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.291024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.291036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 
00:33:27.230 [2024-04-24 10:28:40.291312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.291519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.291532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.291735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.292011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.292024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.292329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.292572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.292586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.292751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.293035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.293048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.293344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.293570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.293584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.293866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.294099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.294113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.294401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.294637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.294652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 
00:33:27.230 [2024-04-24 10:28:40.294873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.295096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.295111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.295334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.295531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.295547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.295820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.296090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.296106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.296383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.296602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.296618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.296919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.297213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.297229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.297397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.297692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.297708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.298007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.298313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.298329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 
00:33:27.230 [2024-04-24 10:28:40.298604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.298804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.298819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.299137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.299310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.299323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.299557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.299720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.299733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.299898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.300115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.300130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.300428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.300735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.300748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.300999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.301319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.301334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 00:33:27.230 [2024-04-24 10:28:40.301606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.301896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.230 [2024-04-24 10:28:40.301909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.230 qpair failed and we were unable to recover it. 
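errno = 111 is ECONNREFUSED on Linux: the target address answered, but nothing was accepting connections on port 4420 (the NVMe/TCP well-known port), so every qpair connect attempt was actively refused. For reference, a minimal standalone sketch of the same failure, assuming a Linux host and a placeholder peer 10.0.0.2:4420 with no listener; this is illustrative C, not SPDK's posix.c code path:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain blocking TCP socket, like the failing posix connect path. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* placeholder target */

    /* If the peer is up but nothing listens on the port, connect() fails
     * with ECONNREFUSED, which is errno 111 on Linux. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a reachable host with the port closed, this prints "connect() failed, errno = 111 (Connection refused)"; an unreachable or firewalled address would instead time out or set a different errno, so the 111s here indicate the target machine was up but the NVMe-oF listener was not.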
00:33:27.230 [2024-04-24 10:28:40.302148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.230 [2024-04-24 10:28:40.302449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.230 qpair failed and we were unable to recover it.
00:33:27.230 [2024-04-24 10:28:40.303170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:27.230 qpair failed and we were unable to recover it.
00:33:27.231 [from 10:28:40.303480 through 10:28:40.347769 the same pattern continues against tqpair=0x7f2af4000b90 at 10.0.0.2 port 4420; every connect() attempt fails with errno = 111 and the qpair cannot be recovered.]
00:33:27.234 [2024-04-24 10:28:40.347968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.348257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.348267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.348508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.348746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.348755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.348973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.349247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.349257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.349460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.349681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.349691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.349904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.350200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.350209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.350515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.350743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.350753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.351044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.351252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.351262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 
00:33:27.234 [2024-04-24 10:28:40.351524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.351786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.351796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.352083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.352293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.352302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.352580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.352795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.352805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.352957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.353272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.353282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.353647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.353910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.353919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.354142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.354362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.354371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.354658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.354944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.354954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 
00:33:27.234 [2024-04-24 10:28:40.355175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.355410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.355420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.355630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.355823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.355832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.356064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.356217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.356226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.356434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.356694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.356703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.357004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.357269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.357279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.357595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.357874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.357883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.358119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.358384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.358393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 
00:33:27.234 [2024-04-24 10:28:40.358684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.358950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.358959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.359228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.359488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.359498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.359763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.360036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.360045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.360197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.360459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.360469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.234 [2024-04-24 10:28:40.360673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.360981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.234 [2024-04-24 10:28:40.360990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.234 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.361259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.361469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.361478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.361755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.362015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.362024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 
00:33:27.235 [2024-04-24 10:28:40.362239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.362449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.362458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.362703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.362973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.362982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.363195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.363454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.363463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.363718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.363995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.364004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.364205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.364418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.364427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.364690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.364923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.364932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.365218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.365496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.365505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 
00:33:27.235 [2024-04-24 10:28:40.365779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.366102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.366111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.366393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.366572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.366581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.366741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.367028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.367038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.367265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.367542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.367552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.367763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.368067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.368081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.368306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.368602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.368611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.368907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.369224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.369234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 
00:33:27.235 [2024-04-24 10:28:40.369495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.369801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.369810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.370010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.370218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.370228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.370509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.370811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.370820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.370973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.371144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.371154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.371350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.371610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.371619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.371936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.372216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.372226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 00:33:27.235 [2024-04-24 10:28:40.372432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.372731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.372740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.235 qpair failed and we were unable to recover it. 
00:33:27.235 [2024-04-24 10:28:40.373028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.235 [2024-04-24 10:28:40.373260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.373270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.373542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.373696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.373705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.374006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.374240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.374250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.374564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.374864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.374874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.375078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.375278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.375287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.375494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.375689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.375698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.375982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.376246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.376256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 
00:33:27.236 [2024-04-24 10:28:40.376465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.376692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.376701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.376858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.377179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.377189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.377469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.377770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.377779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.378072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.378323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.378332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.378489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.378695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.378705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.379026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.379240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.379249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.379522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.379829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.379838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 
00:33:27.236 [2024-04-24 10:28:40.380033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.380260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.380269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.380511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.380790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.380800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.381076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.381361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.381370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.381582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.381771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.381781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.382085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.382307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.382317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.382553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.382776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.382785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.383003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.383218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.383228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 
00:33:27.236 [2024-04-24 10:28:40.383370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.383574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.383584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.383804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.384066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.384079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.384355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.384627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.384636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.384851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.385067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.385081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.385280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.385493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.385503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.385652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.385954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.385963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.386254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.386492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.386502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 
00:33:27.236 [2024-04-24 10:28:40.386786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.387072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.236 [2024-04-24 10:28:40.387081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.236 qpair failed and we were unable to recover it. 00:33:27.236 [2024-04-24 10:28:40.387375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.387527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.387536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.387742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.387947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.387956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.388243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.388542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.388552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.388837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.389033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.389042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.389197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.389480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.389489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.389751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.389890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.389900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 
00:33:27.237 [2024-04-24 10:28:40.390199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.390458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.390467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.390778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.391037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.391046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.391322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.391532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.391542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.391850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.392122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.392131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.392425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.392648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.392657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.392879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.393166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.393175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.393446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.393668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.393678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 
00:33:27.237 [2024-04-24 10:28:40.393941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.394184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.394194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.394339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.394626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.394635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.394923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.395157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.395167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.395439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.395676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.395685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.395988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.396248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.396257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.396540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.396800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.396809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.397075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.397347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.397356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 
00:33:27.237 [2024-04-24 10:28:40.397642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.397846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.397856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.398119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.398406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.398415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.398611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.398892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.398901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.399138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.399424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.399434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.399648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.399957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.399967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.400224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.400381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.400390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.400700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.400915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.400924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 
00:33:27.237 [2024-04-24 10:28:40.401157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.401379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.401388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.237 [2024-04-24 10:28:40.401609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.401896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.237 [2024-04-24 10:28:40.401906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.237 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.402126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.402290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.402299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.402561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.402706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.402715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.402965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.403249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.403259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.403480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.403639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.403648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.403850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.404074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.404086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 
00:33:27.238 [2024-04-24 10:28:40.404295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.404552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.404561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.404850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.405082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.405091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.405326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.405519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.405529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.405678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.405886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.405895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.406108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.406414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.406423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.406642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.406910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.406919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.407145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.407414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.407424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 
00:33:27.238 [2024-04-24 10:28:40.407620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.407838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.407848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.407988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.408212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.408222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.408487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.408693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.408704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.408928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.409136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.409146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.409431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.409565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.409574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.409749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.409919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.409928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.410220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.410521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.410531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 
00:33:27.238 [2024-04-24 10:28:40.410861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.411122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.411131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.411457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.411741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.411751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.412045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.412240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.412249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.412448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.412639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.412648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.412897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.413156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.413165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.413445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.413654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.413665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.413963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.414244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.414254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 
00:33:27.238 [2024-04-24 10:28:40.414532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.414762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.414772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.415016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.415229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.415238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.238 qpair failed and we were unable to recover it. 00:33:27.238 [2024-04-24 10:28:40.415499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.415843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.238 [2024-04-24 10:28:40.415852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.416144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.416387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.416397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.416602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.416886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.416895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.417172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.417433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.417442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.417657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.417918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.417928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 
00:33:27.239 [2024-04-24 10:28:40.418215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.418462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.418472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.418743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.419025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.419037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.419323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.419530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.419539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.419745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.419889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.419899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.420131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.420420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.420430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.420652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.420930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.420939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.421075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.421315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.421325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 
00:33:27.239 [2024-04-24 10:28:40.421611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.421837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.421846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.422130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.422411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.422420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.422714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.422937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.422947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.423166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.423433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.423443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.423715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.423908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.423918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.424163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.424360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.424369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.424580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.424817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.424826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 
00:33:27.239 [2024-04-24 10:28:40.425064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.425271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.425281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.425523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.425715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.425725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.426036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.429349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.429360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.429595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.429880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.429889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.430172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.430476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.430485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.430748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.430973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.239 [2024-04-24 10:28:40.430983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.239 qpair failed and we were unable to recover it. 00:33:27.239 [2024-04-24 10:28:40.431222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.431503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.431512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 
00:33:27.240 [2024-04-24 10:28:40.431794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.432026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.432035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.432251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.432457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.432467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.432728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.432997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.433007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.433295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.433525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.433534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.433810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.434096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.434106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.434393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.434603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.434612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.434900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.435181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.435191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 
00:33:27.240 [2024-04-24 10:28:40.435473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.435702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.435711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.435936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.436222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.436232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.436473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.436733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.436743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.437005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.437279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.437289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.437486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.437775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.437785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.438082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.438365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.438374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.438575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.438793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.438802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 
00:33:27.240 [2024-04-24 10:28:40.439091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.439303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.439312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.439524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.439754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.439763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.440031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.440327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.440337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.440617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.440922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.440931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.441226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.441383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.441393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.441651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.441923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.441933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.442180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.442372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.442381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 
00:33:27.240 [2024-04-24 10:28:40.442687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.442917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.442927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.443190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.443413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.443422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.443632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.443836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.443845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.444043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.444311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.444322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.444563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.444762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.444771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.445057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.445301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.445311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 00:33:27.240 [2024-04-24 10:28:40.445554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.445728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.240 [2024-04-24 10:28:40.445737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.240 qpair failed and we were unable to recover it. 
00:33:27.241 [2024-04-24 10:28:40.445988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.446146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.446156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.446420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.446728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.446738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.446955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.447162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.447171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.447348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.447541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.447550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.447759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.448042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.448051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.448355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.448618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.448628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.448912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.449199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.449209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 
00:33:27.241 [2024-04-24 10:28:40.449487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.449764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.449774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.450013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.450229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.450238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.450530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.450805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.450814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.451045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.451334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.451343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.451583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.451799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.451808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.452092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.452306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.452316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.452556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.452795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.452804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 
00:33:27.241 [2024-04-24 10:28:40.453013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.453302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.453312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.453509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.453810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.453819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.454115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.454384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.454394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.454644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.454855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.454865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.455103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.455247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.455256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.455409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.455676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.455685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.455908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.456192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.456202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 
00:33:27.241 [2024-04-24 10:28:40.456479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.456630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.456639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.456909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.457063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.457075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.457317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.457604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.457613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.457776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.457986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.457995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.458266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.458427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.458437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.458714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.458909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.458918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 00:33:27.241 [2024-04-24 10:28:40.459147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.459312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.459321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.241 qpair failed and we were unable to recover it. 
00:33:27.241 [2024-04-24 10:28:40.459472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.459680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.241 [2024-04-24 10:28:40.459689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.459952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.460166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.460175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.460432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.460706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.460715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.460990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.461290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.461300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.461516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.461648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.461658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.461816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.462024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.462033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.462311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.462571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.462580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 
00:33:27.242 [2024-04-24 10:28:40.462868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.463080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.463090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.463242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.463434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.463444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.463655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.463857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.463866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.464154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.464442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.464452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.464728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.464953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.464962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.465249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.465529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.465538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 00:33:27.242 [2024-04-24 10:28:40.465689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.465949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.242 [2024-04-24 10:28:40.465958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.242 qpair failed and we were unable to recover it. 
00:33:27.513 [2024-04-24 10:28:40.534482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.534771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.534780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.535054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.535303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.535313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.535555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.535895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.535912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.536220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.536465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.536481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.536714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.536947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.536960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.537256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.537538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.537551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.537714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.537952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.537965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 
00:33:27.513 [2024-04-24 10:28:40.538265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.538536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.538550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.538776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.539095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.539109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.539328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.539606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.539619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.539769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.540056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.540069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.513 qpair failed and we were unable to recover it. 00:33:27.513 [2024-04-24 10:28:40.540362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.513 [2024-04-24 10:28:40.540677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.540690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.540991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.541300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.541316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.541613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.541917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.541930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 
00:33:27.514 [2024-04-24 10:28:40.542237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.542470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.542484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.542802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.542972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.542985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.543281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.543560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.543574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.543857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.544181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.544196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.544419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.544632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.544645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.544914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.545216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.545231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.545525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.545750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.545763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 
00:33:27.514 [2024-04-24 10:28:40.545999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.546291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.546304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.546534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.546805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.546818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.547041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.547252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.547266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.547490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.547780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.547793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.548062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.548372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.548386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.548607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.548881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.548895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.549163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.549480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.549493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 
00:33:27.514 [2024-04-24 10:28:40.549725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.550019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.550032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.550330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.550636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.550649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.550961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.551259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.551273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.551491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.551729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.551743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.552031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.552246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.552263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.552478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.552768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.552781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 00:33:27.514 [2024-04-24 10:28:40.553076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.553375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.514 [2024-04-24 10:28:40.553388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.514 qpair failed and we were unable to recover it. 
00:33:27.515 [2024-04-24 10:28:40.553671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.553940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.553953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.554175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.554392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.554406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.554627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.554853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.554866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.555165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.555484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.555498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.555739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.556034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.556047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.556328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.556603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.556616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.556911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.557146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.557160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 
00:33:27.515 [2024-04-24 10:28:40.557376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.557667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.557680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.557970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.558258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.558272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.558545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.558764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.558777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.559045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.559310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.559324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.559573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.559807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.559820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.560090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.560376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.560389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.560657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.560878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.560891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 
00:33:27.515 [2024-04-24 10:28:40.561161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.561468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.561481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.561812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.562081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.562095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.562404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.562572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.562585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.562787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.563056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.563072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.563296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.563511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.563524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.563822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.563956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.563969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.564184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.564475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.564488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 
00:33:27.515 [2024-04-24 10:28:40.564774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.565053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.565066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.565371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.565690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.565703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.565934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.566204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.566218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.566438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.566656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.566669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.566965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.567247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.567261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.567534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.567829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.567842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.568134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.568366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.568379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 
00:33:27.515 [2024-04-24 10:28:40.568585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.568851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.515 [2024-04-24 10:28:40.568865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.515 qpair failed and we were unable to recover it. 00:33:27.515 [2024-04-24 10:28:40.569094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.569390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.569403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.569700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.570006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.570020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.570265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.570559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.570572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.570818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.571057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.571073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.571376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.571679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.571692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.571989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.572221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.572235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 
00:33:27.516 [2024-04-24 10:28:40.572496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.572661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.572675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.572977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.573192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.573206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.573498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.573656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.573669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.574015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.574244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.574261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.574471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.574682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.574695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.574912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.575204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.575218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.575488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.575723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.575735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 
00:33:27.516 [2024-04-24 10:28:40.576028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.576184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.576198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.576498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.576791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.576804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.577101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.577394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.577407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.577653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.577927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.577940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.578143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.578455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.578469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.578775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.578999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.579012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.579307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.579541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.579554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 
00:33:27.516 [2024-04-24 10:28:40.579725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.580003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.580016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.580307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.580528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.580541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.580760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.581053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.581066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.581291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.581500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.581513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.581734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.581997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.582010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.582231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.582537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.582550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.582772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.583050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.583063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 
00:33:27.516 [2024-04-24 10:28:40.583281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.583502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.583515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.516 [2024-04-24 10:28:40.583732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.584051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.516 [2024-04-24 10:28:40.584064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.516 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.584283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.584525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.584538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.584871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.585163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.585177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.585473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.585786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.585799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.586094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.586364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.586377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.586610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.586905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.586918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 
00:33:27.517 [2024-04-24 10:28:40.587138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.587304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.587317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.587610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.587822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.587835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.588128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.588419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.588432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.588727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.588991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.589004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.589297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.589462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.589475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.589768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.589980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.589993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 00:33:27.517 [2024-04-24 10:28:40.590330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.590562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.517 [2024-04-24 10:28:40.590572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420 00:33:27.517 qpair failed and we were unable to recover it. 
00:33:27.517 [2024-04-24 10:28:40.590871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.517 [2024-04-24 10:28:40.591147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.517 [2024-04-24 10:28:40.591157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2af4000b90 with addr=10.0.0.2, port=4420
00:33:27.517 qpair failed and we were unable to recover it.
00:33:27.517 [2024-04-24 10:28:40.595327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.517 [2024-04-24 10:28:40.595612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.517 [2024-04-24 10:28:40.595627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420
00:33:27.517 qpair failed and we were unable to recover it.
[... the same four-line pattern repeats for every reconnect attempt from 10:28:40.590871 through 10:28:40.669325: two connect() failures with errno = 111 from posix.c:1032:posix_sock_create, one sock connection error from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." The first nine failures report tqpair=0x7f2af4000b90; from 10:28:40.595627 onward the failing qpair is 0x7f2afc000b90. Every attempt targets addr=10.0.0.2, port=4420. ...]
00:33:27.522 [2024-04-24 10:28:40.669570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.522 [2024-04-24 10:28:40.669870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.522 [2024-04-24 10:28:40.669883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.522 qpair failed and we were unable to recover it. 00:33:27.522 [2024-04-24 10:28:40.670134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.522 [2024-04-24 10:28:40.670353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.522 [2024-04-24 10:28:40.670366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.522 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.670597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.670752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.670765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.670981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.671280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.671294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.671593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.671809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.671822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.672039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.672267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.672281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.672532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.672817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.672830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 
00:33:27.523 [2024-04-24 10:28:40.673080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.673362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.673375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.673673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.673966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.673979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.674203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.674487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.674500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.674701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.674921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.674934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.675226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.675455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.675467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.675695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.675895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.675908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.676202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.676496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.676509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 
00:33:27.523 [2024-04-24 10:28:40.676657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.676869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.676881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.677177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.677462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.677475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.677750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.677966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.677979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.678204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.678369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.678382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.678607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.678820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.678833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.679175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.679470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.679483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.679755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.679918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.679931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 
00:33:27.523 [2024-04-24 10:28:40.680144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.680408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.680421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.680637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.680906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.680919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.681228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.681502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.681515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.681768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.682040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.682053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.523 [2024-04-24 10:28:40.682337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.682605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.523 [2024-04-24 10:28:40.682617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.523 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.682920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.683211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.683225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.683458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.683658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.683672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 
00:33:27.524 [2024-04-24 10:28:40.683909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.684124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.684138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.684422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.684621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.684634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.684867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.685065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.685082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.685284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.685548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.685561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.685880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.686170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.686184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.686491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.686724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.686737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.687059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.687278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.687291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 
00:33:27.524 [2024-04-24 10:28:40.687507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.687755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.687767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.688088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.688380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.688393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.688679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.688905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.688918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.689188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.689480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.689493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.689786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.690006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.690019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.690242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.690507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.690520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.690734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.690962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.690974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 
00:33:27.524 [2024-04-24 10:28:40.691209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.691501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.691514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.691765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.692060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.692076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.692294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.692576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.692589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.692749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.693038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.693050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.693299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.693593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.693606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.693813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.694108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.694121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.694329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.694640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.694653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 
00:33:27.524 [2024-04-24 10:28:40.694820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.695120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.695134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.695401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.695639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.695652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.524 qpair failed and we were unable to recover it. 00:33:27.524 [2024-04-24 10:28:40.695871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.696098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.524 [2024-04-24 10:28:40.696111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.696388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.696669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.696682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.696974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.697197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.697211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.697427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.697643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.697656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.697888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.698216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.698230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 
00:33:27.525 [2024-04-24 10:28:40.698501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.698796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.698812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.699050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.699347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.699360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.699515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.699780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.699793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.700019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.700218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.700231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.700523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.700824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.700837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.701152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.701440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.701453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.701727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.702015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.702027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 
00:33:27.525 [2024-04-24 10:28:40.702319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.702561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.702574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.702869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.703174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.703188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.703407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.703692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.703705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.703951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.704217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.704233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.704464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.704685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.704698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.704968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.705182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.705195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.705517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.705831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.705844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 
00:33:27.525 [2024-04-24 10:28:40.706063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.706355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.706368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.706663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.706957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.706969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.707265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.707576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.707589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.707828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.708146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.708160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.708332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.708630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.708643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.708922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.709208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.709221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.709517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.709729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.709744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 
00:33:27.525 [2024-04-24 10:28:40.710032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.710326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.710339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.710631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.710839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.710852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.711151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.711436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.711449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.525 qpair failed and we were unable to recover it. 00:33:27.525 [2024-04-24 10:28:40.711769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.525 [2024-04-24 10:28:40.712060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.712076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.712315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.712551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.712564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.712856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.713170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.713184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.713433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.713720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.713733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 
00:33:27.526 [2024-04-24 10:28:40.714054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.714223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.714236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.714556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.714872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.714884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.715154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.715311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.715326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.715601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.715808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.715821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.716090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.716383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.716395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.716689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.717004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.717017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.717315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.717623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.717635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 
00:33:27.526 [2024-04-24 10:28:40.717938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.718221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.718234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.718457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.718749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.718763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.718986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.719273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.719287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.719509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.719798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.719811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.720029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.720315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.720329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.720546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.720776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.720789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.721089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.721301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.721314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 
00:33:27.526 [2024-04-24 10:28:40.721586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.721810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.721823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.722093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.722329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.722342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.722654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.722871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.722884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.723154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.723371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.723384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.723562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.723775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.723788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.724065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.724312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.724325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 00:33:27.526 [2024-04-24 10:28:40.724600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.724813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.526 [2024-04-24 10:28:40.724826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420 00:33:27.526 qpair failed and we were unable to recover it. 
00:33:27.528 [2024-04-24 10:28:40.750832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.751118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.751132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.528 [2024-04-24 10:28:40.751431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.751666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.751679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.528 [2024-04-24 10:28:40.751898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.752110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.752124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2afc000b90 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.528 [2024-04-24 10:28:40.752441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.752668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.752684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.528 [2024-04-24 10:28:40.752903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.753177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.753193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.528 [2024-04-24 10:28:40.753501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.753790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.753803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.528 [2024-04-24 10:28:40.754114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.754409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.528 [2024-04-24 10:28:40.754424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420
00:33:27.528 qpair failed and we were unable to recover it.
00:33:27.801 [2024-04-24 10:28:40.793293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.793493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.793506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.793657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.793895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.793909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.794144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.794343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.794356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.794514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.794737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.794751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.794922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.795084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.795097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.795261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.795464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.795477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.795692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.795828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.795840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 
00:33:27.801 [2024-04-24 10:28:40.796081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.796319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.796332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.796645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.796805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.796819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.797050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.797320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.797333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.801 [2024-04-24 10:28:40.797635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.797860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.801 [2024-04-24 10:28:40.797872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.801 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.798029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.798181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.798194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.798407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.798575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.798588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.798906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.799147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.799161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 
00:33:27.802 [2024-04-24 10:28:40.799323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.799591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.799604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.799884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.800041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.800054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.800275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.800540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.800554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.800778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.800929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.800943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.801194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.801286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.801298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.801515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.801671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.801684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.801909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.802140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.802154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 
00:33:27.802 [2024-04-24 10:28:40.802252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.802459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.802473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.802661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.802858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.802871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.803115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.803267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.803279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.803520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.803683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.803698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.803864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.804093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.804107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.804267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.804482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.804495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.804766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.805033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.805046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 
00:33:27.802 [2024-04-24 10:28:40.805276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.805488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.805501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.805789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.806008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.806021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.806248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.806452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.806466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.806790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.807007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.807020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.807240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.807449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.807462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.807695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.807917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.807930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.808167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.808449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.808467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 
00:33:27.802 [2024-04-24 10:28:40.808617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.808853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.808866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.809177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.809330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.809344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.809574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.809842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.809855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.810127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.810289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.810302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.802 [2024-04-24 10:28:40.810544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.810758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.802 [2024-04-24 10:28:40.810771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.802 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.811075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.811301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.811314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.811528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.811742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.811755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 
00:33:27.803 [2024-04-24 10:28:40.812026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.812229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.812243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.812515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.812735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.812748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.812983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.813280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.813294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.813517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.813729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.813742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.814027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.814261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.814274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.814489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.814756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.814769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.814912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.815186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.815200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 
00:33:27.803 [2024-04-24 10:28:40.815407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.815608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.815622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.815791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.816026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.816039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.816256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.816459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.816473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.816698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.816930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.816943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.817185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.817466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.817479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.817764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.818030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.818043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.818198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.818350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.818363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 
00:33:27.803 [2024-04-24 10:28:40.818607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.818876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.818889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.819041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.819328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.819342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.819629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.819897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.819910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.820090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.820246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.820259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.820415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.820636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.820648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.820864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.821066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.821084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.821241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.821465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.821478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 
00:33:27.803 [2024-04-24 10:28:40.821643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.821915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.821929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.822165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.822452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.822466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.822712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.822981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.822994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.823155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.823417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.823430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.823699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.823844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.823857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.803 [2024-04-24 10:28:40.824074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.824235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.803 [2024-04-24 10:28:40.824248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.803 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.824411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.824560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.824573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 
00:33:27.804 [2024-04-24 10:28:40.824802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.825014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.825028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.825322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.825494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.825507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.825778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.825939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.825952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.826164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.826471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.826483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.826705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.826904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.826917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.827139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.827358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.827374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.827520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.827722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.827735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 
00:33:27.804 [2024-04-24 10:28:40.827900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.828112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.828127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.828397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.828543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.828556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.828701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.828905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.828918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.829237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.829446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.829459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.829629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.829841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.829854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.830094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.830313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.830327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.830488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.830620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.830633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 
00:33:27.804 [2024-04-24 10:28:40.830860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.831088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.831102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.831255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.831534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.831547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.831713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.831925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.831938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.832154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.832365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.832378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.832668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.832817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.832829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.832980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.833218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.833232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.833518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.833753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.833766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 
00:33:27.804 [2024-04-24 10:28:40.834059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.834206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.834231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.834445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.834713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.834726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.834934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.835141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.835156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.835426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.835627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.835640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.835851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.836003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.836015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.836173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.836451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.836464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 00:33:27.804 [2024-04-24 10:28:40.836621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.836910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.804 [2024-04-24 10:28:40.836922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.804 qpair failed and we were unable to recover it. 
00:33:27.805 [2024-04-24 10:28:40.837077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.837363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.837376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.837598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.837888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.837901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.838210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.838410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.838424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.838650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.838866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.838879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.839028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.839260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.839275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.839488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.839780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.839793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.839967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.840259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.840273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 
00:33:27.805 [2024-04-24 10:28:40.840435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.840633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.840646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3710 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.840888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.841121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.841139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.841288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.841506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.841519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.841788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.841999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.842013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.842283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.842442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.842455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.842761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.842982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.842996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 00:33:27.805 [2024-04-24 10:28:40.843144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.843280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.805 [2024-04-24 10:28:40.843295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.805 qpair failed and we were unable to recover it. 
[... the same retry sequence — two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats for every attempt from 10:28:40.841288 through 10:28:40.909315 ...]
00:33:27.810 [2024-04-24 10:28:40.909466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.810 [2024-04-24 10:28:40.909622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.810 [2024-04-24 10:28:40.909634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.810 qpair failed and we were unable to recover it. 00:33:27.810 [2024-04-24 10:28:40.909853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.810 [2024-04-24 10:28:40.910064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.810 [2024-04-24 10:28:40.910081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.910251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.910492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.910505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.910709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.911020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.911032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.911348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.911578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.911591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.911852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.912122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.912136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.912411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.912682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.912695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 
00:33:27.811 [2024-04-24 10:28:40.912911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.913191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.913204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.913448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.913781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.913794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.914094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.914321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.914334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.914552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.914822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.914835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.914988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.915233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.915247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.915543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.915773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.915786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.915991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.916144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.916157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 
00:33:27.811 [2024-04-24 10:28:40.916510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.916798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.916811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.917053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.917345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.917358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.917567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.917763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.917776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.918061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.918274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.918287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.918499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.918786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.918799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.919067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.919278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.919291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.919517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.919807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.919820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 
00:33:27.811 [2024-04-24 10:28:40.920049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.920278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.920292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.920625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.920914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.920928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.921166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.921318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.921339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.921613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.921907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.921920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.922142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.922418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.922431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.922701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.922897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.922910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 00:33:27.811 [2024-04-24 10:28:40.923219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.923436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.811 [2024-04-24 10:28:40.923449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.811 qpair failed and we were unable to recover it. 
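errno = 111 here is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so every connect() the initiator issues for the qpair is rejected and the qpair cannot recover. A quick way to check the same condition from the shell, assuming bash with /dev/tcp support (the address and port come from the log; the probe itself is ours, not part of the SPDK test scripts):

    # Probe the NVMe-oF TCP listener once instead of retrying in a loop.
    addr=10.0.0.2 port=4420
    if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
        echo "listener up on $addr:$port"
    else
        echo "connect to $addr:$port refused or timed out (errno 111 = ECONNREFUSED)"
    fi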
00:33:27.811 - 00:33:27.812 [2024-04-24 10:28:40.923738 - 10:28:40.926748] the connect() failed, errno = 111 / sock connection error of tqpair=0x7f2aec000b90 / qpair failed sequence continues (6 more attempts), interleaved with the test script's xtrace output:
00:33:27.811 10:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:33:27.811 10:28:40 -- common/autotest_common.sh@852 -- # return 0
00:33:27.812 10:28:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:33:27.812 10:28:40 -- common/autotest_common.sh@718 -- # xtrace_disable
00:33:27.812 10:28:40 -- common/autotest_common.sh@10 -- # set +x
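The (( i == 0 )) / return 0 pair in autotest_common.sh reads like the exit path of a retry counter, after which timing_exit start_nvmf_tgt closes the target-startup timing block. A sketch of that idiom under our assumptions (the real helper in autotest_common.sh may differ; wait_for_pid is an illustrative name):

    # Illustrative retry counter: poll until the process answers, give up
    # after $max tries; (( i == 0 )) distinguishes "never had to wait".
    wait_for_pid() {
        local pid=$1 i=0 max=30
        while ! kill -0 "$pid" 2>/dev/null; do
            (( ++i > max )) && return 1   # budget exhausted
            sleep 1
        done
        (( i == 0 )) && echo "target was already up"
        return 0
    }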
00:33:27.812 - 00:33:27.814 [2024-04-24 10:28:40.927019 - 10:28:40.955754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per attempt), followed each time by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." (~63 further attempts, all refused)
00:33:27.814 [2024-04-24 10:28:40.956000 - 10:28:40.958255] the connect() failed, errno = 111 / sock connection error of tqpair=0x7f2aec000b90 / qpair failed sequence continues (6 more attempts), interleaved with the next test-script steps:
00:33:27.814 10:28:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:27.814 10:28:40 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:27.814 10:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:27.814 10:28:40 -- common/autotest_common.sh@10 -- # set +x
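Two script steps stand out amid the noise: the trap registers process_shm/nvmftestfini so the target is torn down even if the test aborts, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the running target to create a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0 (rpc_cmd is the test suite's wrapper around scripts/rpc.py). A minimal standalone sketch of the same pattern, with the socket path and the crude startup wait as assumptions:

    # Start a target, guarantee cleanup on any exit, then create the bdev.
    cleanup() { kill "$tgt_pid" 2>/dev/null || :; }
    trap cleanup SIGINT SIGTERM EXIT

    ./build/bin/nvmf_tgt -r /var/tmp/spdk.sock & tgt_pid=$!
    sleep 2   # crude wait for the RPC socket; the real scripts poll instead
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0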
00:33:27.814 - 00:33:27.815 [2024-04-24 10:28:40.958419 - 10:28:40.967254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per attempt), followed each time by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." (~21 further attempts, all refused)
00:33:27.815 [2024-04-24 10:28:40.967548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.967757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.967771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.967943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.968151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.968165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.968320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.968528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.968542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.968703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.968916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.968930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.969170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.969337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.969351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.969596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.969814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.969828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.970030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.970188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.970203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 
00:33:27.815 [2024-04-24 10:28:40.970415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.970589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.970603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.970760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.970912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.970926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.971060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.971219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.971233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.971459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.971609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.971622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.971820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.971966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.971981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.972206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.972353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.972367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.972531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.972661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.972678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 
00:33:27.815 [2024-04-24 10:28:40.972838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.972988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.973002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.973226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.973390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.815 [2024-04-24 10:28:40.973403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.815 qpair failed and we were unable to recover it. 00:33:27.815 [2024-04-24 10:28:40.973560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.973713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.973727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.973881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.974016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.974031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.974271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.974572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.974588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.974733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.974945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.974959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.975128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.975281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.975295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 
00:33:27.816 [2024-04-24 10:28:40.975432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.975645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.975660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.975805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.975953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.975967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.976108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.976321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.976340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.976548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.976698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.976712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.976857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.977085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.977100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.977262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.977397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.977411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.977554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.977783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.977797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 
00:33:27.816 [2024-04-24 10:28:40.977953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.978079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.978093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
00:33:27.816 [2024-04-24 10:28:40.978400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.978616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.978629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
00:33:27.816 Malloc0 [2024-04-24 10:28:40.978845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.979081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.979095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
00:33:27.816 [2024-04-24 10:28:40.979232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 10:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:27.816 [2024-04-24 10:28:40.979379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.979392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
00:33:27.816 [2024-04-24 10:28:40.979600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 10:28:40 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:27.816 [2024-04-24 10:28:40.979735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.979749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
00:33:27.816 [2024-04-24 10:28:40.979957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 10:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:27.816 [2024-04-24 10:28:40.980183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.980205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
00:33:27.816 10:28:40 -- common/autotest_common.sh@10 -- # set +x
00:33:27.816 [2024-04-24 10:28:40.980418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.980623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.816 [2024-04-24 10:28:40.980636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.816 qpair failed and we were unable to recover it.
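Step @21 of target_disconnect.sh registers the TCP transport; the stray Malloc0 among the errors is the RPC reply to the earlier bdev create. A hedged standalone sketch under the same assumptions as above (the bare -o flag is reproduced exactly as logged; the log does not show its long form, so no expansion is claimed):

    # Register the TCP transport with the nvmf target, flags as logged
    ./scripts/rpc.py nvmf_create_transport -t tcp -o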
00:33:27.816 [2024-04-24 10:28:40.980867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.981081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.981095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.981316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.981450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.981463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.981687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.981823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.981836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.982001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.982200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.982214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.982374] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.816 [2024-04-24 10:28:40.982428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.982584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.982598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.982813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.982952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.982965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.983124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.983329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.983342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 
00:33:27.816 [2024-04-24 10:28:40.983502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.983636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.983649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.816 [2024-04-24 10:28:40.983854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.984135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.816 [2024-04-24 10:28:40.984149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.816 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.984287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.984442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.984455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.984612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.984854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.984867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.985067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.985213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.985227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.985446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.985600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.985614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.985831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.986062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.986079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 
00:33:27.817 [2024-04-24 10:28:40.986308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.986524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.986538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.986696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.986828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.986840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.986991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.987192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.987205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.987352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.987506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.987519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.987654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.987804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.987817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.988039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.988256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.988270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.988420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.988575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.988589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 
00:33:27.817 [2024-04-24 10:28:40.988806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.989010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.989023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.989187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.989470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.989484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.989703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.989840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.989854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.990009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.990213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.990227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.990473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.990684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.990697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.990872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 10:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] [2024-04-24 10:28:40.991011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.991025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.991166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 10:28:40 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-04-24 10:28:40.991377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.991394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 10:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable [2024-04-24 10:28:40.991698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.991880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.991894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 10:28:40 -- common/autotest_common.sh@10 -- # set +x
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.992113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.992279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.992293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.992452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.992587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.992602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.992767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.992915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.992929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.993078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.993258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.993272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.993481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.993699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.993712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
00:33:27.817 [2024-04-24 10:28:40.993875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.993971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.817 [2024-04-24 10:28:40.993985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.817 qpair failed and we were unable to recover it.
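The @22 xtrace above is the subsystem-creation step: it defines the NVMe-oF subsystem the host will connect to. A hedged standalone sketch with the flags exactly as logged, where -a allows any host NQN to connect and -s sets the reported serial number:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001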
00:33:27.817 [2024-04-24 10:28:40.994273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.994588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.994601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.817 [2024-04-24 10:28:40.994741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.994870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.817 [2024-04-24 10:28:40.994883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.817 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.995103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.995258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.995272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.995416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.995579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.995593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.995744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.995922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.995935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.996085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.996170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.996183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.996326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.996483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.996498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 
00:33:27.818 [2024-04-24 10:28:40.996651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.996813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.996826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.997138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.997373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.997386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.997529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.997676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.997688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.997829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.998080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.998094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.998369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.998585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.998598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:40.998782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.998932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.998946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 10:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:27.818 [2024-04-24 10:28:40.999102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.999249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:40.999262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 
00:33:27.818 10:28:40 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-04-24 10:28:40.999378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:40.999618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:40.999631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
00:33:27.818 10:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable [2024-04-24 10:28:40.999842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 10:28:40 -- common/autotest_common.sh@10 -- # set +x [2024-04-24 10:28:41.000068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.000087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
00:33:27.818 [2024-04-24 10:28:41.000233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.000351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.000364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
00:33:27.818 [2024-04-24 10:28:41.000473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.000676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.000689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
00:33:27.818 [2024-04-24 10:28:41.000831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.000987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.001000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
00:33:27.818 [2024-04-24 10:28:41.001213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.001372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.001385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
00:33:27.818 [2024-04-24 10:28:41.001494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.001651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.818 [2024-04-24 10:28:41.001664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.818 qpair failed and we were unable to recover it.
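Step @24 exposes the malloc bdev through the subsystem as a namespace, giving the host something to issue I/O against once a connection finally sticks. Hedged standalone equivalent, same assumptions as the earlier sketches:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0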
00:33:27.818 [2024-04-24 10:28:41.001818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.001924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.001939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:41.002148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.002361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.002373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:41.002535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.002668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.002681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:41.002832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.002971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.002984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.818 qpair failed and we were unable to recover it. 00:33:27.818 [2024-04-24 10:28:41.003205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.003406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.818 [2024-04-24 10:28:41.003419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.003634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.003782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.003796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.003930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 
00:33:27.819 [2024-04-24 10:28:41.004235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.004548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.004835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.004940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.005109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.005201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.005213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.005356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.005626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.005640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.005881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.005987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.006001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 00:33:27.819 [2024-04-24 10:28:41.006153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.006292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.819 [2024-04-24 10:28:41.006304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420 00:33:27.819 qpair failed and we were unable to recover it. 
00:33:27.819 [2024-04-24 10:28:41.006521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.006794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.006808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.006969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 10:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] [2024-04-24 10:28:41.007190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.007204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.007357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 10:28:41 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-04-24 10:28:41.007555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.007568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 10:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:27.819 [2024-04-24 10:28:41.007853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 10:28:41 -- common/autotest_common.sh@10 -- # set +x
00:33:27.819 [2024-04-24 10:28:41.008142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.008156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.008323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.008548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.008562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.008816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.009039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.009052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
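Step @25 adds the TCP listener on exactly the address and port the host has been dialing; the *** NVMe/TCP Target Listening *** notice just below confirms it took effect. Hedged standalone equivalent:

    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420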
00:33:27.819 [2024-04-24 10:28:41.009204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.009366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.009379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.009544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.009698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.009711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.009985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.010204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.010218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aec000b90 with addr=10.0.0.2, port=4420
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.010439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:27.819 [2024-04-24 10:28:41.010589] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:27.819 [2024-04-24 10:28:41.013452] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:33:27.819 [2024-04-24 10:28:41.013505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f2aec000b90 (107): Transport endpoint is not connected
00:33:27.819 [2024-04-24 10:28:41.013557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.819 qpair failed and we were unable to recover it.
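Note on the errno = 111 storm above: 111 is ECONNREFUSED, i.e. the host keeps retrying its TCP connect while nothing is bound on the target side yet, and the storm ends exactly where nvmf_tcp_listen reports the target listening on 10.0.0.2 port 4420. A minimal standalone sketch (not SPDK code; plain POSIX sockets, purely illustrative) of the same retry-until-refused pattern:

/* connect_retry.c - retry a TCP connect and report ECONNREFUSED (errno 111). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt); /* listener is up */
            close(fd);
            return 0;
        }
        /* With no listener bound, this prints errno = 111 (ECONNREFUSED),
         * matching the posix_sock_create lines above. */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        usleep(100 * 1000); /* brief back-off before the next attempt */
    }
    return 1;
}

Run against a host with no listener on port 4420, every attempt prints errno = 111; once a listener is added (here, the rpc_cmd nvmf_subsystem_add_listener call traced above), the connect succeeds.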
00:33:27.819 10:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:27.819 10:28:41 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:27.819 10:28:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:27.819 10:28:41 -- common/autotest_common.sh@10 -- # set +x
00:33:27.819 10:28:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:27.819 [2024-04-24 10:28:41.023076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.819 [2024-04-24 10:28:41.023174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.819 [2024-04-24 10:28:41.023195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.819 [2024-04-24 10:28:41.023203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.819 [2024-04-24 10:28:41.023209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:27.819 [2024-04-24 10:28:41.023227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 10:28:41 -- host/target_disconnect.sh@58 -- # wait 500761
00:33:27.819 [2024-04-24 10:28:41.032941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.819 [2024-04-24 10:28:41.033019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.819 [2024-04-24 10:28:41.033037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.819 [2024-04-24 10:28:41.033045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.819 [2024-04-24 10:28:41.033051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:27.819 [2024-04-24 10:28:41.033067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.819 qpair failed and we were unable to recover it.
00:33:27.819 [2024-04-24 10:28:41.042823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.819 [2024-04-24 10:28:41.042902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.819 [2024-04-24 10:28:41.042920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.819 [2024-04-24 10:28:41.042927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.819 [2024-04-24 10:28:41.042934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:27.819 [2024-04-24 10:28:41.042950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.819 qpair failed and we were unable to recover it.
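Note on the repeating block above: after the forced disconnect, each I/O qpair CONNECT carries a controller ID the target no longer recognizes ("Unknown controller ID 0x1"), and the host sees the completion fail with sct 1, sc 130. A small sketch of that status decode (not SPDK code; the SC/SCT bit positions follow the NVMe completion status layout, and the packed value is constructed here for illustration):

/* status_decode.c - unpack SCT/SC from an NVMe completion status word. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Build a status word with SCT = 1, SC = 130 (0x82), as logged above.
     * In the 16-bit status field, SC occupies bits [8:1], SCT bits [11:9]. */
    uint16_t status = (uint16_t)((1u << 9) | (130u << 1));

    unsigned sc  = (status >> 1) & 0xff; /* status code */
    unsigned sct = (status >> 9) & 0x7;  /* status code type */

    printf("sct %u, sc %u (0x%02x)\n", sct, sc, sc);
    if (sct == 1 && sc == 0x82)
        printf("command-specific status: fabrics CONNECT invalid parameters\n");
    return 0;
}

sc 130 is 0x82, which for a fabrics CONNECT command is the invalid-parameters rejection, consistent with a stale controller ID after the disconnect; the test keeps polling (the wait at target_disconnect.sh@58) while these retries fail.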
00:33:27.819 [2024-04-24 10:28:41.052900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.820 [2024-04-24 10:28:41.052977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.820 [2024-04-24 10:28:41.052994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.820 [2024-04-24 10:28:41.053001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.820 [2024-04-24 10:28:41.053007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:27.820 [2024-04-24 10:28:41.053023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:27.820 qpair failed and we were unable to recover it.
00:33:27.820 [2024-04-24 10:28:41.062929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:27.820 [2024-04-24 10:28:41.063001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:27.820 [2024-04-24 10:28:41.063017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:27.820 [2024-04-24 10:28:41.063024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:27.820 [2024-04-24 10:28:41.063030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:27.820 [2024-04-24 10:28:41.063045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.080 qpair failed and we were unable to recover it.
00:33:28.080 [2024-04-24 10:28:41.072920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.080 [2024-04-24 10:28:41.072996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.073014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.073021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.073028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.073044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.082972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.083051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.083077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.083085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.083092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.083108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.092966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.093045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.093063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.093074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.093081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.093098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.102992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.103073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.103089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.103095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.103101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.103118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.113014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.113099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.113116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.113123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.113130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.113146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.123084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.123164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.123181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.123188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.123194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.123214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.133063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.133139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.133155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.133162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.133171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.133186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.143155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.143235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.143252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.143259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.143264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.143279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.153225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.153332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.153349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.153356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.153363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.153379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.163162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.163239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.163258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.163265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.163272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.163289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.173242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.173317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.173337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.173344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.173350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.173365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.183199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.183276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.183292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.081 [2024-04-24 10:28:41.183299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.081 [2024-04-24 10:28:41.183305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.081 [2024-04-24 10:28:41.183320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.081 qpair failed and we were unable to recover it.
00:33:28.081 [2024-04-24 10:28:41.193233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.081 [2024-04-24 10:28:41.193305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.081 [2024-04-24 10:28:41.193320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.193330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.193336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.193351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.203310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.203430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.203446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.203453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.203459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.203475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.213353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.213438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.213455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.213462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.213468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.213487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.223334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.223409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.223427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.223434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.223442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.223457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.233359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.233453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.233467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.233474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.233500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.233515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.243413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.243494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.243509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.243516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.243525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.243540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.253442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.253518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.253533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.253540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.253547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.253563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.263626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.263723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.263740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.263747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.263753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90
00:33:28.082 [2024-04-24 10:28:41.263769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.263796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1200 (9): Bad file descriptor
00:33:28.082 [2024-04-24 10:28:41.273604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.273735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.273757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.273765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.273772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.082 [2024-04-24 10:28:41.273791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.283543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.283620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.283636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.283643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.283649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.082 [2024-04-24 10:28:41.283665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.293655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.293734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.293751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.293758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.293765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.082 [2024-04-24 10:28:41.293780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.082 qpair failed and we were unable to recover it.
00:33:28.082 [2024-04-24 10:28:41.303575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.082 [2024-04-24 10:28:41.303652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.082 [2024-04-24 10:28:41.303669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.082 [2024-04-24 10:28:41.303679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.082 [2024-04-24 10:28:41.303685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.083 [2024-04-24 10:28:41.303702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.083 qpair failed and we were unable to recover it.
00:33:28.083 [2024-04-24 10:28:41.313646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.083 [2024-04-24 10:28:41.313716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.083 [2024-04-24 10:28:41.313733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.083 [2024-04-24 10:28:41.313741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.083 [2024-04-24 10:28:41.313747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.083 [2024-04-24 10:28:41.313762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.083 qpair failed and we were unable to recover it.
00:33:28.083 [2024-04-24 10:28:41.323651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.083 [2024-04-24 10:28:41.323733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.083 [2024-04-24 10:28:41.323750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.083 [2024-04-24 10:28:41.323757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.083 [2024-04-24 10:28:41.323764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.083 [2024-04-24 10:28:41.323779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.083 qpair failed and we were unable to recover it.
00:33:28.083 [2024-04-24 10:28:41.333709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.083 [2024-04-24 10:28:41.333836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.083 [2024-04-24 10:28:41.333852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.083 [2024-04-24 10:28:41.333859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.083 [2024-04-24 10:28:41.333865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.083 [2024-04-24 10:28:41.333882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.083 qpair failed and we were unable to recover it.
00:33:28.083 [2024-04-24 10:28:41.343727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.083 [2024-04-24 10:28:41.343806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.083 [2024-04-24 10:28:41.343821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.083 [2024-04-24 10:28:41.343829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.083 [2024-04-24 10:28:41.343834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.083 [2024-04-24 10:28:41.343850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.083 qpair failed and we were unable to recover it.
00:33:28.083 [2024-04-24 10:28:41.353864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.083 [2024-04-24 10:28:41.353940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.083 [2024-04-24 10:28:41.353957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.083 [2024-04-24 10:28:41.353964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.083 [2024-04-24 10:28:41.353970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.083 [2024-04-24 10:28:41.353986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.363717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.363791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.363807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.363814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.363820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.363836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.373763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.373843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.373859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.373866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.373872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.373887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.383777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.383854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.383871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.383878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.383884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.383899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.393806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.393884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.393904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.393911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.393917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.393932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.403900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.403976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.403993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.404001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.404007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.404024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.413867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.413945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.413961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.413968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.413973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.413988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.423974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.424052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.424072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.424080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.424086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.424101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.433947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.434040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.434056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.434063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.434069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.434092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.443996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.444074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.444093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.444100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.444106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.444121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.344 qpair failed and we were unable to recover it.
00:33:28.344 [2024-04-24 10:28:41.454063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.344 [2024-04-24 10:28:41.454146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.344 [2024-04-24 10:28:41.454162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.344 [2024-04-24 10:28:41.454169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.344 [2024-04-24 10:28:41.454175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.344 [2024-04-24 10:28:41.454190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.464099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.464181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.464197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.464204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.464210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.464225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.474100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.474184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.474200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.474208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.474214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.474229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.484144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.484225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.484246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.484253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.484259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.484274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.494167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.494296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.494312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.494319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.494326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.494341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.504198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.504267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.504284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.504291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.504296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.504312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.514235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.514309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.514325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.514332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.514338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.514353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.524303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.524377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.524394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.524400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.524407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.524425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.534260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.534336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.534353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.534360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.534365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.534380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.544330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.544403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.544419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.544426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.544433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.544448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.554356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.345 [2024-04-24 10:28:41.554463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.345 [2024-04-24 10:28:41.554479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.345 [2024-04-24 10:28:41.554486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.345 [2024-04-24 10:28:41.554492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.345 [2024-04-24 10:28:41.554507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.345 qpair failed and we were unable to recover it.
00:33:28.345 [2024-04-24 10:28:41.564372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.345 [2024-04-24 10:28:41.564446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.345 [2024-04-24 10:28:41.564462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.345 [2024-04-24 10:28:41.564470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.345 [2024-04-24 10:28:41.564476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.345 [2024-04-24 10:28:41.564491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-04-24 10:28:41.574445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.345 [2024-04-24 10:28:41.574565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.345 [2024-04-24 10:28:41.574585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.345 [2024-04-24 10:28:41.574592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.345 [2024-04-24 10:28:41.574598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.345 [2024-04-24 10:28:41.574613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.345 qpair failed and we were unable to recover it. 00:33:28.345 [2024-04-24 10:28:41.584441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.345 [2024-04-24 10:28:41.584518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.345 [2024-04-24 10:28:41.584534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.346 [2024-04-24 10:28:41.584540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.346 [2024-04-24 10:28:41.584546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.346 [2024-04-24 10:28:41.584561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.346 [2024-04-24 10:28:41.594478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.346 [2024-04-24 10:28:41.594554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.346 [2024-04-24 10:28:41.594570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.346 [2024-04-24 10:28:41.594576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.346 [2024-04-24 10:28:41.594583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.346 [2024-04-24 10:28:41.594597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-04-24 10:28:41.604429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.346 [2024-04-24 10:28:41.604513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.346 [2024-04-24 10:28:41.604529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.346 [2024-04-24 10:28:41.604536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.346 [2024-04-24 10:28:41.604542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.346 [2024-04-24 10:28:41.604557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.346 qpair failed and we were unable to recover it. 00:33:28.346 [2024-04-24 10:28:41.614457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.346 [2024-04-24 10:28:41.614534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.346 [2024-04-24 10:28:41.614551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.346 [2024-04-24 10:28:41.614558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.346 [2024-04-24 10:28:41.614567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.346 [2024-04-24 10:28:41.614583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.346 qpair failed and we were unable to recover it. 
00:33:28.607 [2024-04-24 10:28:41.624570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.624648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.624665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.624672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.624679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.624693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 00:33:28.607 [2024-04-24 10:28:41.634515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.634596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.634612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.634618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.634624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.634640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 00:33:28.607 [2024-04-24 10:28:41.644599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.644676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.644693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.644700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.644707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.644722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 
00:33:28.607 [2024-04-24 10:28:41.654567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.654645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.654661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.654668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.654674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.654689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 00:33:28.607 [2024-04-24 10:28:41.664665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.664795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.664811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.664818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.664824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.664839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 00:33:28.607 [2024-04-24 10:28:41.674711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.674786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.674802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.674809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.674815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.674830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 
00:33:28.607 [2024-04-24 10:28:41.684721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.684795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.684811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.684818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.684824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.684839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 00:33:28.607 [2024-04-24 10:28:41.694689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.694768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.694784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.694790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.694796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.694810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 00:33:28.607 [2024-04-24 10:28:41.704769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.704837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.607 [2024-04-24 10:28:41.704853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.607 [2024-04-24 10:28:41.704859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.607 [2024-04-24 10:28:41.704871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.607 [2024-04-24 10:28:41.704886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.607 qpair failed and we were unable to recover it. 
00:33:28.607 [2024-04-24 10:28:41.714830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.607 [2024-04-24 10:28:41.714910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.714926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.714933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.714939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.714954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.724846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.724921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.724938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.724945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.724952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.724967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.734897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.734976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.734992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.734999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.735005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.735020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 
00:33:28.608 [2024-04-24 10:28:41.744872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.744989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.745004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.745011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.745018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.745032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.754955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.755028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.755044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.755051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.755057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.755077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.764988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.765063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.765083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.765090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.765097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.765111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 
00:33:28.608 [2024-04-24 10:28:41.775021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.775128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.775144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.775150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.775157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.775171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.785031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.785111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.785127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.785134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.785140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.785155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.795054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.795129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.795145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.795155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.795162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.795177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 
00:33:28.608 [2024-04-24 10:28:41.805080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.805154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.805170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.805177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.805183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.805198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.815207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.815326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.815342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.815349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.815355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.815370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.608 [2024-04-24 10:28:41.825141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.825222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.825238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.825245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.825251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.825265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 
00:33:28.608 [2024-04-24 10:28:41.835176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.608 [2024-04-24 10:28:41.835252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.608 [2024-04-24 10:28:41.835268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.608 [2024-04-24 10:28:41.835275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.608 [2024-04-24 10:28:41.835281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.608 [2024-04-24 10:28:41.835295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.608 qpair failed and we were unable to recover it. 00:33:28.609 [2024-04-24 10:28:41.845237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.609 [2024-04-24 10:28:41.845378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.609 [2024-04-24 10:28:41.845393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.609 [2024-04-24 10:28:41.845400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.609 [2024-04-24 10:28:41.845406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.609 [2024-04-24 10:28:41.845422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.609 qpair failed and we were unable to recover it. 00:33:28.609 [2024-04-24 10:28:41.855231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.609 [2024-04-24 10:28:41.855313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.609 [2024-04-24 10:28:41.855328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.609 [2024-04-24 10:28:41.855336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.609 [2024-04-24 10:28:41.855342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.609 [2024-04-24 10:28:41.855356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.609 qpair failed and we were unable to recover it. 
00:33:28.609 [2024-04-24 10:28:41.865275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.609 [2024-04-24 10:28:41.865353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.609 [2024-04-24 10:28:41.865369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.609 [2024-04-24 10:28:41.865376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.609 [2024-04-24 10:28:41.865382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.609 [2024-04-24 10:28:41.865397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.609 qpair failed and we were unable to recover it. 00:33:28.609 [2024-04-24 10:28:41.875296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.609 [2024-04-24 10:28:41.875373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.609 [2024-04-24 10:28:41.875388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.609 [2024-04-24 10:28:41.875395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.609 [2024-04-24 10:28:41.875401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.609 [2024-04-24 10:28:41.875415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.609 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.885323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.885428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.885443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.885454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.885462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.885477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 
00:33:28.870 [2024-04-24 10:28:41.895355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.895429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.895445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.895452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.895458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.895473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.905383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.905462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.905478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.905485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.905491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.905506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.915479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.915555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.915571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.915578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.915584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.915599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 
00:33:28.870 [2024-04-24 10:28:41.925431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.925518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.925533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.925540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.925546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.925561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.935463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.935547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.935563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.935570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.935576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.935591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.945524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.945599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.945615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.945622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.945628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.945643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 
00:33:28.870 [2024-04-24 10:28:41.955477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.955550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.955566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.955573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.955579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.955593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.965586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.965664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.965680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.965687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.965693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.965707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.975578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.975659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.975680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.975686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.975692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.975707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 
00:33:28.870 [2024-04-24 10:28:41.985599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.985712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.985729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.870 [2024-04-24 10:28:41.985736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.870 [2024-04-24 10:28:41.985742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.870 [2024-04-24 10:28:41.985756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.870 qpair failed and we were unable to recover it. 00:33:28.870 [2024-04-24 10:28:41.995650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.870 [2024-04-24 10:28:41.995724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.870 [2024-04-24 10:28:41.995740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:41.995747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:41.995752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:41.995767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.005659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.005734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.005750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.005757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.005763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.005777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 
00:33:28.871 [2024-04-24 10:28:42.015696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.015774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.015790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.015797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.015803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.015822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.025720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.025800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.025817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.025826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.025833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.025849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.035752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.035825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.035842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.035850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.035856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.035872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 
00:33:28.871 [2024-04-24 10:28:42.045768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.045845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.045861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.045868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.045875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.045890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.055726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.055807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.055823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.055830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.055837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.055852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.065826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.065922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.065943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.065950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.065956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.065971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 
00:33:28.871 [2024-04-24 10:28:42.075863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.075940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.075956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.075962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.075969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.075984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.085857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.085939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.085955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.085962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.085969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.085983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.095859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.095948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.095964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.095971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.095977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.095991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 
00:33:28.871 [2024-04-24 10:28:42.105947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.106035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.106049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.106056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.106065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.106084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.115968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.116045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.116061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.116068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.871 [2024-04-24 10:28:42.116079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.871 [2024-04-24 10:28:42.116094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.871 qpair failed and we were unable to recover it. 00:33:28.871 [2024-04-24 10:28:42.125960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.871 [2024-04-24 10:28:42.126035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.871 [2024-04-24 10:28:42.126050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.871 [2024-04-24 10:28:42.126057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.872 [2024-04-24 10:28:42.126063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:28.872 [2024-04-24 10:28:42.126082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.872 qpair failed and we were unable to recover it. 
00:33:28.872 [2024-04-24 10:28:42.136028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.872 [2024-04-24 10:28:42.136108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.872 [2024-04-24 10:28:42.136124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.872 [2024-04-24 10:28:42.136131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.872 [2024-04-24 10:28:42.136137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.872 [2024-04-24 10:28:42.136152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.872 qpair failed and we were unable to recover it.
00:33:28.872 [2024-04-24 10:28:42.146075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:28.872 [2024-04-24 10:28:42.146153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:28.872 [2024-04-24 10:28:42.146169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:28.872 [2024-04-24 10:28:42.146177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:28.872 [2024-04-24 10:28:42.146183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:28.872 [2024-04-24 10:28:42.146198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:28.872 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.156093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.156239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.156256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.156263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.156269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.156285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.166114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.166207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.166223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.166230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.166237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.166252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.176159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.176252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.176268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.176275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.176281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.176296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.186193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.186267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.186283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.186290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.186296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.186311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.196212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.196292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.196307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.196314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.196323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.196338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.206231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.206305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.206321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.206328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.206334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.206349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.216304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.216412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.216428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.216435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.216441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.216455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.226316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.226395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.226411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.226417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.226423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.226438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.236329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.236400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.236415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.236422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.236428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.236443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.246366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.246443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.246459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.246465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.246471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.246486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.256392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.256467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.256484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.256490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.256496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.132 [2024-04-24 10:28:42.256511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.132 qpair failed and we were unable to recover it.
00:33:29.132 [2024-04-24 10:28:42.266434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.132 [2024-04-24 10:28:42.266507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.132 [2024-04-24 10:28:42.266523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.132 [2024-04-24 10:28:42.266530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.132 [2024-04-24 10:28:42.266536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.266551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.276489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.276592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.276607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.276614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.276620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.276635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.286482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.286559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.286575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.286585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.286591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.286606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.296515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.296593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.296609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.296615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.296622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.296636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.306534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.306614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.306630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.306637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.306643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.306658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.316563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.316637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.316653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.316660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.316666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.316681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.326583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.326658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.326674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.326681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.326687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.326702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.336583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.336661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.336677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.336683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.336689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.336704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.346652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.346763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.346779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.346786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.346793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.346808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.356680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.356760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.356775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.356782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.356789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.356804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.366715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.366795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.366812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.366819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.366825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.366839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.376777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.376880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.376896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.376906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.376912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.376926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.386800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.386876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.386893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.386899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.386905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.386920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.396825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.396894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.396910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.396916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.396922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.396937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.133 [2024-04-24 10:28:42.406838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.133 [2024-04-24 10:28:42.406917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.133 [2024-04-24 10:28:42.406932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.133 [2024-04-24 10:28:42.406939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.133 [2024-04-24 10:28:42.406945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.133 [2024-04-24 10:28:42.406960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.133 qpair failed and we were unable to recover it.
00:33:29.393 [2024-04-24 10:28:42.416925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.393 [2024-04-24 10:28:42.417005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.393 [2024-04-24 10:28:42.417022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.393 [2024-04-24 10:28:42.417028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.393 [2024-04-24 10:28:42.417035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.393 [2024-04-24 10:28:42.417049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.393 qpair failed and we were unable to recover it.
00:33:29.393 [2024-04-24 10:28:42.426942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.393 [2024-04-24 10:28:42.427019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.393 [2024-04-24 10:28:42.427036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.393 [2024-04-24 10:28:42.427042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.393 [2024-04-24 10:28:42.427048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.393 [2024-04-24 10:28:42.427064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.393 qpair failed and we were unable to recover it.
00:33:29.393 [2024-04-24 10:28:42.436940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.393 [2024-04-24 10:28:42.437018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.393 [2024-04-24 10:28:42.437035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.393 [2024-04-24 10:28:42.437041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.437048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.437062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.446959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.447049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.447065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.447075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.447081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.447096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.456922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.457012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.457028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.457035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.457040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.457056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.467031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.467117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.467136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.467143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.467149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.467165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.477078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.477149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.477164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.477171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.477177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.477193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.487089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.487163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.487180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.487187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.487193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.487209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.497153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.497266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.497282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.497290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.497296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.497312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.507150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.507224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.507241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.507248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.507254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.507272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.517183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.517260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.517276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.517282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.517288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.517303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.527207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.527283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.527298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.527305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.527311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.527326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.537178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.537256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.537272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.537279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.537285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.537300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.547286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.547366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.547382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.547388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.547394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.547410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.557234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.557312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.557331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.557338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.557344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.557359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.567327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.567443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.567459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.567466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.567472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.567487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.577364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.577487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.577503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.577510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.577516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.577531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.394 qpair failed and we were unable to recover it.
00:33:29.394 [2024-04-24 10:28:42.587359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.394 [2024-04-24 10:28:42.587439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.394 [2024-04-24 10:28:42.587454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.394 [2024-04-24 10:28:42.587462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.394 [2024-04-24 10:28:42.587468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.394 [2024-04-24 10:28:42.587483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.597425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.597504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.597520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.597527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.597533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.597553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.607488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.607564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.607580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.607586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.607593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.607607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.617410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.617492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.617508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.617515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.617521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.617536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.627495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.627653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.627668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.627675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.627681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.627696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.637554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.637632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.637648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.637655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.637661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.637676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.647517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.647592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.647612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.647619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.647625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.647639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.657584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.657662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.657677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.657684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.657691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.657706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.395 [2024-04-24 10:28:42.667685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.395 [2024-04-24 10:28:42.667759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.395 [2024-04-24 10:28:42.667775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.395 [2024-04-24 10:28:42.667782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.395 [2024-04-24 10:28:42.667788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.395 [2024-04-24 10:28:42.667803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.395 qpair failed and we were unable to recover it.
00:33:29.655 [2024-04-24 10:28:42.677672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.655 [2024-04-24 10:28:42.677755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.655 [2024-04-24 10:28:42.677771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.655 [2024-04-24 10:28:42.677778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.655 [2024-04-24 10:28:42.677783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.655 [2024-04-24 10:28:42.677798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.655 qpair failed and we were unable to recover it.
00:33:29.655 [2024-04-24 10:28:42.687676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.655 [2024-04-24 10:28:42.687755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.655 [2024-04-24 10:28:42.687770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.655 [2024-04-24 10:28:42.687777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.655 [2024-04-24 10:28:42.687787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.655 [2024-04-24 10:28:42.687803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.655 qpair failed and we were unable to recover it.
00:33:29.655 [2024-04-24 10:28:42.697731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:29.656 [2024-04-24 10:28:42.697812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:29.656 [2024-04-24 10:28:42.697828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:29.656 [2024-04-24 10:28:42.697835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:29.656 [2024-04-24 10:28:42.697841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90
00:33:29.656 [2024-04-24 10:28:42.697856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:29.656 qpair failed and we were unable to recover it.
00:33:29.656 [2024-04-24 10:28:42.707705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.707794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.707810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.707817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.707824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.707839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.717784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.717860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.717877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.717883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.717889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.717905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.727852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.727922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.727938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.727945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.727951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.727966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 
00:33:29.656 [2024-04-24 10:28:42.737779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.737859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.737875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.737882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.737888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.737903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.747867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.747943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.747958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.747965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.747971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.747986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.757914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.757988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.758004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.758011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.758017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.758032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 
00:33:29.656 [2024-04-24 10:28:42.767981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.768106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.768123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.768130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.768136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.768151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.777956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.778036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.778051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.778062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.778068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.778087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.788022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.788106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.788122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.788129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.788136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.788151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 
00:33:29.656 [2024-04-24 10:28:42.798020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.798150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.798166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.798173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.798179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.798194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.808055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.808135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.808151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.808158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.808164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.808179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-04-24 10:28:42.818105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.818186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.818202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.818209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.656 [2024-04-24 10:28:42.818215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.656 [2024-04-24 10:28:42.818231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.656 qpair failed and we were unable to recover it. 
00:33:29.656 [2024-04-24 10:28:42.828060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.656 [2024-04-24 10:28:42.828140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.656 [2024-04-24 10:28:42.828156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.656 [2024-04-24 10:28:42.828162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.828168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.828184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.838142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.838222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.838237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.838244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.838250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.838265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.848182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.848319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.848335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.848341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.848347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.848363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-04-24 10:28:42.858230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.858307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.858323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.858329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.858336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.858351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.868262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.868387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.868403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.868412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.868418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.868433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.878292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.878369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.878385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.878391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.878398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.878412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-04-24 10:28:42.888289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.888365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.888381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.888388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.888394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.888409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.898355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.898429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.898445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.898452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.898458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.898473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.908401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.908506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.908522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.908529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.908535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.908550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-04-24 10:28:42.918408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.918488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.918504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.918510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.918517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.918531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-04-24 10:28:42.928423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.657 [2024-04-24 10:28:42.928507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.657 [2024-04-24 10:28:42.928523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.657 [2024-04-24 10:28:42.928529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.657 [2024-04-24 10:28:42.928536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.657 [2024-04-24 10:28:42.928551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.917 [2024-04-24 10:28:42.938462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.917 [2024-04-24 10:28:42.938541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.917 [2024-04-24 10:28:42.938557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.917 [2024-04-24 10:28:42.938564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.917 [2024-04-24 10:28:42.938570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.917 [2024-04-24 10:28:42.938585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.917 qpair failed and we were unable to recover it. 
00:33:29.917 [2024-04-24 10:28:42.948407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.917 [2024-04-24 10:28:42.948504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.917 [2024-04-24 10:28:42.948520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.917 [2024-04-24 10:28:42.948527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:42.948533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:42.948549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:42.958514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:42.958594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:42.958615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:42.958622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:42.958628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:42.958643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:42.968533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:42.968607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:42.968623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:42.968630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:42.968636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:42.968651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 
00:33:29.918 [2024-04-24 10:28:42.978573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:42.978645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:42.978662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:42.978668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:42.978675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:42.978690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:42.988624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:42.988712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:42.988728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:42.988735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:42.988741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:42.988756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:42.998651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:42.998729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:42.998744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:42.998751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:42.998757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:42.998775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 
00:33:29.918 [2024-04-24 10:28:43.008630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:43.008707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:43.008724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:43.008730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:43.008737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:43.008751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:43.018698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:43.018780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:43.018796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:43.018803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:43.018809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:43.018824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:43.028728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:43.028811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:43.028826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:43.028833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:43.028839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:43.028854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 
00:33:29.918 [2024-04-24 10:28:43.038691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:43.038763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:43.038778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:43.038785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:43.038791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:43.038806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:43.048749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:43.048824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.918 [2024-04-24 10:28:43.048843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.918 [2024-04-24 10:28:43.048850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.918 [2024-04-24 10:28:43.048856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.918 [2024-04-24 10:28:43.048870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.918 qpair failed and we were unable to recover it. 00:33:29.918 [2024-04-24 10:28:43.058831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.918 [2024-04-24 10:28:43.058905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.058921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.058928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.058934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.058949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 
00:33:29.919 [2024-04-24 10:28:43.068833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.069016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.069033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.069040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.069046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.069061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.078871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.078948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.078965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.078971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.078978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.078993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.088895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.088974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.088991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.088997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.089004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.089022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 
00:33:29.919 [2024-04-24 10:28:43.098944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.099023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.099040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.099047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.099053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.099068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.108967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.109073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.109088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.109095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.109101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.109116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.118985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.119065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.119086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.119093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.119099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.119115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 
00:33:29.919 [2024-04-24 10:28:43.129001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.129082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.129098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.129105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.129112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.129127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.139054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.139135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.139154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.139161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.139167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.139183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.149025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.149108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.149124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.149131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.149136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.149151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 
00:33:29.919 [2024-04-24 10:28:43.159035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.159111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.159128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.159134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.159141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.159157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.169133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.169225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.169241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.169247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.169254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.169269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 00:33:29.919 [2024-04-24 10:28:43.179176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.179247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.179263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.179270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.179279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.919 [2024-04-24 10:28:43.179294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.919 qpair failed and we were unable to recover it. 
00:33:29.919 [2024-04-24 10:28:43.189204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:29.919 [2024-04-24 10:28:43.189276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:29.919 [2024-04-24 10:28:43.189292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:29.919 [2024-04-24 10:28:43.189298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:29.919 [2024-04-24 10:28:43.189305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:29.920 [2024-04-24 10:28:43.189319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:29.920 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.199210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.199289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.199306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.199313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.199319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.199334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.209287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.209364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.209381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.209388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.209394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.209409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 
00:33:30.180 [2024-04-24 10:28:43.219324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.219435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.219451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.219459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.219465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.219480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.229348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.229456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.229472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.229479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.229485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.229500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.239350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.239429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.239445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.239452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.239458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.239473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 
00:33:30.180 [2024-04-24 10:28:43.249372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.249447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.249462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.249469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.249475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.249490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.259405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.259475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.259491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.259498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.259504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.259519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.269436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.269508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.269523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.269530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.269540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.269555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 
00:33:30.180 [2024-04-24 10:28:43.279474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.279543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.279559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.279566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.279572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.279587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.289490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.289561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.289577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.289584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.289590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.289605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 00:33:30.180 [2024-04-24 10:28:43.299528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.180 [2024-04-24 10:28:43.299608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.180 [2024-04-24 10:28:43.299624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.180 [2024-04-24 10:28:43.299631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.180 [2024-04-24 10:28:43.299637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.180 [2024-04-24 10:28:43.299652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.180 qpair failed and we were unable to recover it. 
00:33:30.180 [2024-04-24 10:28:43.309562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.309642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.309658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.309666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.309672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:30.181 [2024-04-24 10:28:43.309687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.319605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.319698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.319727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.319739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.319749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.319772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.329620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.329695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.329713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.329720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.329726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.329742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 
00:33:30.181 [2024-04-24 10:28:43.339661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.339769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.339787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.339794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.339801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.339816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.349591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.349671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.349688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.349695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.349702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.349716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.359652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.359721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.359738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.359749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.359755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.359770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 
00:33:30.181 [2024-04-24 10:28:43.369732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.369808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.369825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.369833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.369839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.369855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.379673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.379751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.379769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.379776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.379782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.379797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.389780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.389857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.389875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.389882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.389888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.389903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 
00:33:30.181 [2024-04-24 10:28:43.399774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.399860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.399877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.399884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.399891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.399905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.409852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.409923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.409940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.409947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.409953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.409968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.419934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.420037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.420056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.420063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.420074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.420090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 
00:33:30.181 [2024-04-24 10:28:43.429914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.429990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.430007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.430015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.430021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.181 [2024-04-24 10:28:43.430036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.181 qpair failed and we were unable to recover it. 00:33:30.181 [2024-04-24 10:28:43.439950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.181 [2024-04-24 10:28:43.440026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.181 [2024-04-24 10:28:43.440043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.181 [2024-04-24 10:28:43.440050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.181 [2024-04-24 10:28:43.440056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.182 [2024-04-24 10:28:43.440075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.182 qpair failed and we were unable to recover it. 00:33:30.182 [2024-04-24 10:28:43.450020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.182 [2024-04-24 10:28:43.450100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.182 [2024-04-24 10:28:43.450118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.182 [2024-04-24 10:28:43.450128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.182 [2024-04-24 10:28:43.450134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.182 [2024-04-24 10:28:43.450149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.182 qpair failed and we were unable to recover it. 
00:33:30.442 [2024-04-24 10:28:43.459951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.460031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.442 [2024-04-24 10:28:43.460048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.442 [2024-04-24 10:28:43.460055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.442 [2024-04-24 10:28:43.460061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.442 [2024-04-24 10:28:43.460086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.442 qpair failed and we were unable to recover it. 00:33:30.442 [2024-04-24 10:28:43.469998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.470078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.442 [2024-04-24 10:28:43.470096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.442 [2024-04-24 10:28:43.470103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.442 [2024-04-24 10:28:43.470109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.442 [2024-04-24 10:28:43.470125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.442 qpair failed and we were unable to recover it. 00:33:30.442 [2024-04-24 10:28:43.480069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.480147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.442 [2024-04-24 10:28:43.480164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.442 [2024-04-24 10:28:43.480171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.442 [2024-04-24 10:28:43.480178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.442 [2024-04-24 10:28:43.480193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.442 qpair failed and we were unable to recover it. 
00:33:30.442 [2024-04-24 10:28:43.490096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.490170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.442 [2024-04-24 10:28:43.490188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.442 [2024-04-24 10:28:43.490195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.442 [2024-04-24 10:28:43.490201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.442 [2024-04-24 10:28:43.490216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.442 qpair failed and we were unable to recover it. 00:33:30.442 [2024-04-24 10:28:43.500101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.500245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.442 [2024-04-24 10:28:43.500262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.442 [2024-04-24 10:28:43.500270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.442 [2024-04-24 10:28:43.500276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.442 [2024-04-24 10:28:43.500292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.442 qpair failed and we were unable to recover it. 00:33:30.442 [2024-04-24 10:28:43.510150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.510219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.442 [2024-04-24 10:28:43.510236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.442 [2024-04-24 10:28:43.510243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.442 [2024-04-24 10:28:43.510249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.442 [2024-04-24 10:28:43.510264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.442 qpair failed and we were unable to recover it. 
00:33:30.442 [2024-04-24 10:28:43.520172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.442 [2024-04-24 10:28:43.520244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.520262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.520269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.520275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.520290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.530213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.530291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.530308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.530315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.530320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.530336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.540243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.540318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.540336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.540346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.540352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.540367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 
00:33:30.443 [2024-04-24 10:28:43.550264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.550337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.550354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.550361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.550367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.550382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.560214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.560293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.560310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.560317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.560323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.560338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.570308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.570382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.570399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.570406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.570412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.570426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 
00:33:30.443 [2024-04-24 10:28:43.580351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.580425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.580441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.580448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.580454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.580469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.590387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.590455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.590472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.590480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.590486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.590500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.600417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.600500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.600517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.600524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.600530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.600545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 
00:33:30.443 [2024-04-24 10:28:43.610438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.610525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.610542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.610549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.610555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.610570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.620474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.620550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.620567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.620574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.620580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.620595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.630504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.630585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.630602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.630612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.630618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.630634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 
00:33:30.443 [2024-04-24 10:28:43.640564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.640674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.640691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.640698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.640704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.443 [2024-04-24 10:28:43.640718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.443 qpair failed and we were unable to recover it. 00:33:30.443 [2024-04-24 10:28:43.650489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.443 [2024-04-24 10:28:43.650590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.443 [2024-04-24 10:28:43.650607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.443 [2024-04-24 10:28:43.650614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.443 [2024-04-24 10:28:43.650621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.650635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 00:33:30.444 [2024-04-24 10:28:43.660601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.444 [2024-04-24 10:28:43.660682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.444 [2024-04-24 10:28:43.660699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.444 [2024-04-24 10:28:43.660707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.444 [2024-04-24 10:28:43.660713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.660727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 
00:33:30.444 [2024-04-24 10:28:43.670523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.444 [2024-04-24 10:28:43.670595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.444 [2024-04-24 10:28:43.670613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.444 [2024-04-24 10:28:43.670620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.444 [2024-04-24 10:28:43.670626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.670641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 00:33:30.444 [2024-04-24 10:28:43.680557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.444 [2024-04-24 10:28:43.680637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.444 [2024-04-24 10:28:43.680654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.444 [2024-04-24 10:28:43.680662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.444 [2024-04-24 10:28:43.680668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.680683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 00:33:30.444 [2024-04-24 10:28:43.690592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.444 [2024-04-24 10:28:43.690668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.444 [2024-04-24 10:28:43.690685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.444 [2024-04-24 10:28:43.690692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.444 [2024-04-24 10:28:43.690699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.690713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 
00:33:30.444 [2024-04-24 10:28:43.700686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.444 [2024-04-24 10:28:43.700763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.444 [2024-04-24 10:28:43.700780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.444 [2024-04-24 10:28:43.700788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.444 [2024-04-24 10:28:43.700794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.700808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 00:33:30.444 [2024-04-24 10:28:43.710763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.444 [2024-04-24 10:28:43.710871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.444 [2024-04-24 10:28:43.710888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.444 [2024-04-24 10:28:43.710895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.444 [2024-04-24 10:28:43.710901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.444 [2024-04-24 10:28:43.710916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.444 qpair failed and we were unable to recover it. 00:33:30.704 [2024-04-24 10:28:43.720763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.704 [2024-04-24 10:28:43.720836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.704 [2024-04-24 10:28:43.720854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.704 [2024-04-24 10:28:43.720864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.704 [2024-04-24 10:28:43.720871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.704 [2024-04-24 10:28:43.720885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.704 qpair failed and we were unable to recover it. 
00:33:30.704 [2024-04-24 10:28:43.730776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.704 [2024-04-24 10:28:43.730851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.704 [2024-04-24 10:28:43.730868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.704 [2024-04-24 10:28:43.730875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.704 [2024-04-24 10:28:43.730882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.704 [2024-04-24 10:28:43.730896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-04-24 10:28:43.740822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.740897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.740914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.740921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.740928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.740942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.750830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.750907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.750924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.750931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.750937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.750952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-04-24 10:28:43.760853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.760938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.760955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.760962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.760968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.760983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.770895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.770971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.770989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.770996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.771002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.771017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.780939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.781012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.781029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.781036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.781042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.781057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-04-24 10:28:43.790957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.791029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.791046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.791054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.791060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.791078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.800974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.801060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.801080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.801088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.801094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.801108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.811013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.811092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.811112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.811119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.811125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.811140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-04-24 10:28:43.821058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.821166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.821184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.821191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.821197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.821212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.831082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.831159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.831176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.831184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.831190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.831205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-04-24 10:28:43.841022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.705 [2024-04-24 10:28:43.841097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.705 [2024-04-24 10:28:43.841115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.705 [2024-04-24 10:28:43.841121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.705 [2024-04-24 10:28:43.841128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.705 [2024-04-24 10:28:43.841142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-04-24 10:28:43.851147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.705 [2024-04-24 10:28:43.851221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.705 [2024-04-24 10:28:43.851238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.705 [2024-04-24 10:28:43.851246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.705 [2024-04-24 10:28:43.851252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.705 [2024-04-24 10:28:43.851267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.705 qpair failed and we were unable to recover it.
00:33:30.705 [2024-04-24 10:28:43.861178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.705 [2024-04-24 10:28:43.861294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.705 [2024-04-24 10:28:43.861311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.705 [2024-04-24 10:28:43.861318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.705 [2024-04-24 10:28:43.861324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.705 [2024-04-24 10:28:43.861339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.705 qpair failed and we were unable to recover it.
00:33:30.705 [2024-04-24 10:28:43.871215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.705 [2024-04-24 10:28:43.871299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.705 [2024-04-24 10:28:43.871316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.705 [2024-04-24 10:28:43.871323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.705 [2024-04-24 10:28:43.871329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.871344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.881312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.881386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.881403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.881411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.881419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.881434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.891291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.891366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.891382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.891390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.891396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.891410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.901310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.901390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.901410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.901417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.901425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.901440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.911252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.911332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.911349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.911356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.911362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.911376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.921331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.921407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.921425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.921432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.921438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.921454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.931357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.931434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.931451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.931458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.931464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.931479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.941386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.941463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.941480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.941487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.941494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.941514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.951389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.951461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.951479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.951486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.951493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.951507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.961415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.706 [2024-04-24 10:28:43.961492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.706 [2024-04-24 10:28:43.961509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.706 [2024-04-24 10:28:43.961516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.706 [2024-04-24 10:28:43.961522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.706 [2024-04-24 10:28:43.961537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.706 qpair failed and we were unable to recover it.
00:33:30.706 [2024-04-24 10:28:43.971464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.706 [2024-04-24 10:28:43.971589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.706 [2024-04-24 10:28:43.971607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.706 [2024-04-24 10:28:43.971615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.706 [2024-04-24 10:28:43.971621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.706 [2024-04-24 10:28:43.971636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-04-24 10:28:43.981535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.967 [2024-04-24 10:28:43.981612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.967 [2024-04-24 10:28:43.981629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.967 [2024-04-24 10:28:43.981636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.967 [2024-04-24 10:28:43.981642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.967 [2024-04-24 10:28:43.981657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.967 qpair failed and we were unable to recover it. 00:33:30.967 [2024-04-24 10:28:43.991445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:30.967 [2024-04-24 10:28:43.991522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:30.967 [2024-04-24 10:28:43.991544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:30.967 [2024-04-24 10:28:43.991551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:30.967 [2024-04-24 10:28:43.991557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:30.967 [2024-04-24 10:28:43.991572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:30.967 qpair failed and we were unable to recover it. 
00:33:30.967 [2024-04-24 10:28:44.001486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.967 [2024-04-24 10:28:44.001559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.967 [2024-04-24 10:28:44.001576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.967 [2024-04-24 10:28:44.001583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.967 [2024-04-24 10:28:44.001589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.967 [2024-04-24 10:28:44.001604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.967 qpair failed and we were unable to recover it.
00:33:30.967 [2024-04-24 10:28:44.011595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.967 [2024-04-24 10:28:44.011671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.967 [2024-04-24 10:28:44.011688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.967 [2024-04-24 10:28:44.011695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.967 [2024-04-24 10:28:44.011701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.967 [2024-04-24 10:28:44.011716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.967 qpair failed and we were unable to recover it.
00:33:30.967 [2024-04-24 10:28:44.021617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.967 [2024-04-24 10:28:44.021738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.967 [2024-04-24 10:28:44.021756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.967 [2024-04-24 10:28:44.021763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.967 [2024-04-24 10:28:44.021769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.967 [2024-04-24 10:28:44.021784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.967 qpair failed and we were unable to recover it.
00:33:30.967 [2024-04-24 10:28:44.031636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.967 [2024-04-24 10:28:44.031704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.967 [2024-04-24 10:28:44.031722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.967 [2024-04-24 10:28:44.031729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.967 [2024-04-24 10:28:44.031736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.967 [2024-04-24 10:28:44.031754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.967 qpair failed and we were unable to recover it.
00:33:30.967 [2024-04-24 10:28:44.041628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.967 [2024-04-24 10:28:44.041713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.967 [2024-04-24 10:28:44.041730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.967 [2024-04-24 10:28:44.041737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.967 [2024-04-24 10:28:44.041744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.967 [2024-04-24 10:28:44.041758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.967 qpair failed and we were unable to recover it.
00:33:30.967 [2024-04-24 10:28:44.051631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.967 [2024-04-24 10:28:44.051709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.967 [2024-04-24 10:28:44.051726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.051733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.051739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.051754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.061665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.061744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.061761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.061768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.061774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.061789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.071694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.071766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.071783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.071790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.071796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.071811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.081756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.081828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.081849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.081857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.081863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.081878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.091847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.091921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.091937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.091944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.091951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.091965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.101826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.101908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.101926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.101933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.101939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.101954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.111846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.111923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.111938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.111945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.111951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.111965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.121856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.121932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.121949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.121956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.121964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.121983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.131863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.131939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.131956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.131963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.131969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.131984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.141918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.141998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.142015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.142022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.142028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.142043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.151945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.152023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.152041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.152048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.152054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.152073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.161947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.162127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.968 [2024-04-24 10:28:44.162144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.968 [2024-04-24 10:28:44.162151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.968 [2024-04-24 10:28:44.162165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.968 [2024-04-24 10:28:44.162185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.968 qpair failed and we were unable to recover it.
00:33:30.968 [2024-04-24 10:28:44.172044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.968 [2024-04-24 10:28:44.172189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.172210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.172218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.172224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.172240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.182078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.182165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.182183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.182190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.182197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.182211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.192106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.192246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.192263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.192270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.192277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.192292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.202164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.202241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.202258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.202265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.202272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.202286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.212178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.212253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.212270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.212278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.212284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.212302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.222217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.222290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.222307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.222314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.222320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.222335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.232252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.232332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.232349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.232356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.232362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.232377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:30.969 [2024-04-24 10:28:44.242277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:30.969 [2024-04-24 10:28:44.242355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:30.969 [2024-04-24 10:28:44.242371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:30.969 [2024-04-24 10:28:44.242378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:30.969 [2024-04-24 10:28:44.242384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:30.969 [2024-04-24 10:28:44.242399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:30.969 qpair failed and we were unable to recover it.
00:33:31.230 [2024-04-24 10:28:44.252301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.230 [2024-04-24 10:28:44.252378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.230 [2024-04-24 10:28:44.252395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.230 [2024-04-24 10:28:44.252401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.230 [2024-04-24 10:28:44.252407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.230 [2024-04-24 10:28:44.252422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.230 qpair failed and we were unable to recover it.
00:33:31.230 [2024-04-24 10:28:44.262261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.230 [2024-04-24 10:28:44.262336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.230 [2024-04-24 10:28:44.262356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.230 [2024-04-24 10:28:44.262363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.230 [2024-04-24 10:28:44.262369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.230 [2024-04-24 10:28:44.262384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.230 qpair failed and we were unable to recover it.
00:33:31.230 [2024-04-24 10:28:44.272288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.272356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.272374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.272381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.272387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.272403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.282299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.282371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.282389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.282396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.282402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.282417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.292437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.292515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.292532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.292539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.292545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.292560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.302389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.302483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.302501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.302508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.302514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.302533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.312488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.312562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.312580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.312587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.312593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.312608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.322505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.322579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.322597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.322604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.322610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.322625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.332553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.332630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.332647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.332654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.332661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.332676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.342571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.342650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.342667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.342674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.342680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.342695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.352610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.352688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.352709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.352716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.352722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.352737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.362635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.362706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.362722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.362730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.362736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.362751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.372680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.372757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.372774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.372781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.372787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.372802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.382692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.382836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.382853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.382860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.382866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.382881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.392676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.392755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.392772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.392779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.392789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.231 [2024-04-24 10:28:44.392804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.231 qpair failed and we were unable to recover it.
00:33:31.231 [2024-04-24 10:28:44.402745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.231 [2024-04-24 10:28:44.402814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.231 [2024-04-24 10:28:44.402832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.231 [2024-04-24 10:28:44.402839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.231 [2024-04-24 10:28:44.402845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.402860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.412873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.412954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.412973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.412980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.412986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.413002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.422813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.422892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.422910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.422917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.422924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.422939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.432843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.432965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.432982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.432990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.432996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.433010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.442876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.442945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.442966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.442973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.442979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.442993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.452920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.452996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.453012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.453019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.453025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.453040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.462860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.462947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.462964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.462971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.462977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.462992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.472944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.473021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.473038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.473045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.473051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.473066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.482994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.483074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.483090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.483097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.483107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.483123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.493028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.493117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.493134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.493141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.493147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.493162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.232 [2024-04-24 10:28:44.503041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.232 [2024-04-24 10:28:44.503125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.232 [2024-04-24 10:28:44.503142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.232 [2024-04-24 10:28:44.503149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.232 [2024-04-24 10:28:44.503155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.232 [2024-04-24 10:28:44.503170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.232 qpair failed and we were unable to recover it.
00:33:31.493 [2024-04-24 10:28:44.513093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.493 [2024-04-24 10:28:44.513168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.493 [2024-04-24 10:28:44.513186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.493 [2024-04-24 10:28:44.513192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.493 [2024-04-24 10:28:44.513199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.493 [2024-04-24 10:28:44.513213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.493 qpair failed and we were unable to recover it.
00:33:31.493 [2024-04-24 10:28:44.523115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.493 [2024-04-24 10:28:44.523193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.493 [2024-04-24 10:28:44.523210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.493 [2024-04-24 10:28:44.523217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.493 [2024-04-24 10:28:44.523224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.493 [2024-04-24 10:28:44.523238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.493 qpair failed and we were unable to recover it.
00:33:31.493 [2024-04-24 10:28:44.533126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.493 [2024-04-24 10:28:44.533217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.493 [2024-04-24 10:28:44.533235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.493 [2024-04-24 10:28:44.533242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.493 [2024-04-24 10:28:44.533248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.493 [2024-04-24 10:28:44.533262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.493 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.543217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.543327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.543343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.543350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.543356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.543370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.553195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.553272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.553289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.553296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.553302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.553316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.563243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.563340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.563356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.563363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.563370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.563389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.573224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.573298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.573315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.573322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.573331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.573346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.583279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.583351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.583368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.583375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.583381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.583396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.593323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.593411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.593428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.593435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.593441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.593455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.603349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.603428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.603445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.603452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.603458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.603473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.613383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.613464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.613481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.613488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.613494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.613509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.623401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.623481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.623499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.623506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.623512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.623527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.633512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.633594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.633611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.633619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.633625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.633640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.643484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.643561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.643578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.643585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.643591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.494 [2024-04-24 10:28:44.643606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.494 qpair failed and we were unable to recover it.
00:33:31.494 [2024-04-24 10:28:44.653584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.494 [2024-04-24 10:28:44.653665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.494 [2024-04-24 10:28:44.653682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.494 [2024-04-24 10:28:44.653689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.494 [2024-04-24 10:28:44.653696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.653710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.663533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.663607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.663625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.663632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.663642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.663658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.673551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.673625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.673642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.673649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.673656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.673670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.683593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.683670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.683687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.683694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.683700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.683713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.693616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.693695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.693711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.693718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.693725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.693739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.703650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.703721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.703738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.703745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.703751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.703766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.713686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.713794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.713812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.713819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.713825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.713839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.723699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.723773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.723790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.723797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.723803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.723818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.733728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.733808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.733825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.733832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.733839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.733853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.743761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.743836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.743854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.743860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.743866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.743881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.753797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.753875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.753892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.753900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.753909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.753924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.495 [2024-04-24 10:28:44.763831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.495 [2024-04-24 10:28:44.763912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.495 [2024-04-24 10:28:44.763929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.495 [2024-04-24 10:28:44.763937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.495 [2024-04-24 10:28:44.763942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.495 [2024-04-24 10:28:44.763958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.495 qpair failed and we were unable to recover it.
00:33:31.756 [2024-04-24 10:28:44.773796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.756 [2024-04-24 10:28:44.773876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.756 [2024-04-24 10:28:44.773893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.756 [2024-04-24 10:28:44.773900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.756 [2024-04-24 10:28:44.773906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.756 [2024-04-24 10:28:44.773920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.756 qpair failed and we were unable to recover it.
00:33:31.756 [2024-04-24 10:28:44.783863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.756 [2024-04-24 10:28:44.783942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.756 [2024-04-24 10:28:44.783959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.756 [2024-04-24 10:28:44.783965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.756 [2024-04-24 10:28:44.783971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.756 [2024-04-24 10:28:44.783986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.756 qpair failed and we were unable to recover it.
00:33:31.756 [2024-04-24 10:28:44.793900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.756 [2024-04-24 10:28:44.793979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.756 [2024-04-24 10:28:44.793996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.756 [2024-04-24 10:28:44.794003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.756 [2024-04-24 10:28:44.794010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.756 [2024-04-24 10:28:44.794024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.756 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.803916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.803995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.804012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.804019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.804025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.804039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.813968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.814043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.814061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.814068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.814078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.814093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.823995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.824076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.824094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.824101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.824107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.824123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.834012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.834095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.834113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.834120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.834126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.834141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.844039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.844122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.844139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.844146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.844158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.844173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.854101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.854176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.854193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.854200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.854206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.854220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.864132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.864214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.864231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.864238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.864245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.864259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.874129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.874206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.874225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.874232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.874238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.874253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.884158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.884235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.884252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.884259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.884265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710
00:33:31.757 [2024-04-24 10:28:44.884280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.894148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.894235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.894261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.894271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.894279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.757 [2024-04-24 10:28:44.894300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.904191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.904271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.904290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.904297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.904304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.757 [2024-04-24 10:28:44.904319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.914339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.914417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.914434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.914441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.914448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.757 [2024-04-24 10:28:44.914464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.924282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.924362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.924379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.924387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.924393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.757 [2024-04-24 10:28:44.924408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.757 qpair failed and we were unable to recover it.
00:33:31.757 [2024-04-24 10:28:44.934311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.757 [2024-04-24 10:28:44.934388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.757 [2024-04-24 10:28:44.934405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.757 [2024-04-24 10:28:44.934415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.757 [2024-04-24 10:28:44.934421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.758 [2024-04-24 10:28:44.934436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.758 qpair failed and we were unable to recover it.
00:33:31.758 [2024-04-24 10:28:44.944339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.758 [2024-04-24 10:28:44.944425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.758 [2024-04-24 10:28:44.944442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.758 [2024-04-24 10:28:44.944449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.758 [2024-04-24 10:28:44.944455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.758 [2024-04-24 10:28:44.944471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.758 qpair failed and we were unable to recover it.
00:33:31.758 [2024-04-24 10:28:44.954366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.758 [2024-04-24 10:28:44.954446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.758 [2024-04-24 10:28:44.954463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.758 [2024-04-24 10:28:44.954470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.758 [2024-04-24 10:28:44.954476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.758 [2024-04-24 10:28:44.954492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.758 qpair failed and we were unable to recover it.
00:33:31.758 [2024-04-24 10:28:44.964376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.758 [2024-04-24 10:28:44.964469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.758 [2024-04-24 10:28:44.964486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.758 [2024-04-24 10:28:44.964493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.758 [2024-04-24 10:28:44.964499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.758 [2024-04-24 10:28:44.964514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.758 qpair failed and we were unable to recover it.
00:33:31.758 [2024-04-24 10:28:44.974426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.758 [2024-04-24 10:28:44.974506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.758 [2024-04-24 10:28:44.974522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.758 [2024-04-24 10:28:44.974530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.758 [2024-04-24 10:28:44.974536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.758 [2024-04-24 10:28:44.974551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.758 qpair failed and we were unable to recover it.
00:33:31.758 [2024-04-24 10:28:44.984455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:31.758 [2024-04-24 10:28:44.984535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:31.758 [2024-04-24 10:28:44.984551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:31.758 [2024-04-24 10:28:44.984558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:31.758 [2024-04-24 10:28:44.984565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:31.758 [2024-04-24 10:28:44.984580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.758 qpair failed and we were unable to recover it.
00:33:31.758 [2024-04-24 10:28:44.994492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.758 [2024-04-24 10:28:44.994569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.758 [2024-04-24 10:28:44.994586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.758 [2024-04-24 10:28:44.994593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.758 [2024-04-24 10:28:44.994600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:31.758 [2024-04-24 10:28:44.994615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-04-24 10:28:45.004514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.758 [2024-04-24 10:28:45.004590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.758 [2024-04-24 10:28:45.004606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.758 [2024-04-24 10:28:45.004613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.758 [2024-04-24 10:28:45.004619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:31.758 [2024-04-24 10:28:45.004634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-04-24 10:28:45.014550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.758 [2024-04-24 10:28:45.014627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.758 [2024-04-24 10:28:45.014644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.758 [2024-04-24 10:28:45.014650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.758 [2024-04-24 10:28:45.014657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:31.758 [2024-04-24 10:28:45.014672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:31.758 qpair failed and we were unable to recover it. 
00:33:31.758 [2024-04-24 10:28:45.024570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.758 [2024-04-24 10:28:45.024651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.758 [2024-04-24 10:28:45.024669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.758 [2024-04-24 10:28:45.024683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.758 [2024-04-24 10:28:45.024689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:31.758 [2024-04-24 10:28:45.024705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:31.758 qpair failed and we were unable to recover it. 00:33:32.019 [2024-04-24 10:28:45.034600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.019 [2024-04-24 10:28:45.034677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.019 [2024-04-24 10:28:45.034693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.019 [2024-04-24 10:28:45.034700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.019 [2024-04-24 10:28:45.034706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.019 [2024-04-24 10:28:45.034721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.019 qpair failed and we were unable to recover it. 00:33:32.019 [2024-04-24 10:28:45.044635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.019 [2024-04-24 10:28:45.044713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.019 [2024-04-24 10:28:45.044730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.019 [2024-04-24 10:28:45.044737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.019 [2024-04-24 10:28:45.044744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.019 [2024-04-24 10:28:45.044759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.019 qpair failed and we were unable to recover it. 
00:33:32.019 [2024-04-24 10:28:45.054679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.019 [2024-04-24 10:28:45.054762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.019 [2024-04-24 10:28:45.054779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.019 [2024-04-24 10:28:45.054785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.019 [2024-04-24 10:28:45.054791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.019 [2024-04-24 10:28:45.054806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.019 qpair failed and we were unable to recover it. 00:33:32.019 [2024-04-24 10:28:45.064690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.019 [2024-04-24 10:28:45.064770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.019 [2024-04-24 10:28:45.064787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.019 [2024-04-24 10:28:45.064794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.019 [2024-04-24 10:28:45.064799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.019 [2024-04-24 10:28:45.064815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.019 qpair failed and we were unable to recover it. 00:33:32.019 [2024-04-24 10:28:45.074724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.019 [2024-04-24 10:28:45.074807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.019 [2024-04-24 10:28:45.074823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.019 [2024-04-24 10:28:45.074830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.019 [2024-04-24 10:28:45.074836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.019 [2024-04-24 10:28:45.074851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.019 qpair failed and we were unable to recover it. 
00:33:32.019 [2024-04-24 10:28:45.084742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.019 [2024-04-24 10:28:45.084815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.019 [2024-04-24 10:28:45.084832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.019 [2024-04-24 10:28:45.084839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.084846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.084861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.094794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.094869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.094886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.094893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.094900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.094915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.104792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.104865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.104881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.104888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.104894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.104909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 
00:33:32.020 [2024-04-24 10:28:45.114834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.114912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.114932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.114938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.114944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.114959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.124869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.124946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.124962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.124969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.124975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.124991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.134952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.135056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.135077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.135085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.135091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.135107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 
00:33:32.020 [2024-04-24 10:28:45.144920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.145000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.145017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.145024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.145030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.145046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.154942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.155029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.155046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.155053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.155059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.155083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.165034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.165144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.165160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.165167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.165173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.165188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 
00:33:32.020 [2024-04-24 10:28:45.174996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.175075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.175091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.175098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.175105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.175120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.185041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.185124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.185140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.185147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.185153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.185169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.195066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.195149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.195165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.195171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.195177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.195196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 
00:33:32.020 [2024-04-24 10:28:45.205083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.205159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.205178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.205185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.205191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.205207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.215076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.020 [2024-04-24 10:28:45.215154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.020 [2024-04-24 10:28:45.215170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.020 [2024-04-24 10:28:45.215177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.020 [2024-04-24 10:28:45.215184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.020 [2024-04-24 10:28:45.215198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.020 qpair failed and we were unable to recover it. 00:33:32.020 [2024-04-24 10:28:45.225172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.225250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.225267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.225274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.225280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.225295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 
00:33:32.021 [2024-04-24 10:28:45.235250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.235329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.235345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.235351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.235357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.235372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 00:33:32.021 [2024-04-24 10:28:45.245275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.245349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.245366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.245372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.245378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.245399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 00:33:32.021 [2024-04-24 10:28:45.255261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.255339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.255355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.255362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.255368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.255384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 
00:33:32.021 [2024-04-24 10:28:45.265286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.265359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.265376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.265384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.265390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.265405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 00:33:32.021 [2024-04-24 10:28:45.275330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.275415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.275431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.275439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.275445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.275460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 00:33:32.021 [2024-04-24 10:28:45.285386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.285466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.285482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.285489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.285495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.285510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 
00:33:32.021 [2024-04-24 10:28:45.295416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.021 [2024-04-24 10:28:45.295497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.021 [2024-04-24 10:28:45.295513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.021 [2024-04-24 10:28:45.295520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.021 [2024-04-24 10:28:45.295526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.021 [2024-04-24 10:28:45.295541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.021 qpair failed and we were unable to recover it. 00:33:32.282 [2024-04-24 10:28:45.305455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.282 [2024-04-24 10:28:45.305532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.282 [2024-04-24 10:28:45.305550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.282 [2024-04-24 10:28:45.305557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.282 [2024-04-24 10:28:45.305563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.282 [2024-04-24 10:28:45.305579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.282 qpair failed and we were unable to recover it. 00:33:32.282 [2024-04-24 10:28:45.315412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.282 [2024-04-24 10:28:45.315498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.282 [2024-04-24 10:28:45.315515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.282 [2024-04-24 10:28:45.315522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.282 [2024-04-24 10:28:45.315528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.282 [2024-04-24 10:28:45.315543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.282 qpair failed and we were unable to recover it. 
00:33:32.282 [2024-04-24 10:28:45.325399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.282 [2024-04-24 10:28:45.325476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.282 [2024-04-24 10:28:45.325492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.282 [2024-04-24 10:28:45.325499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.282 [2024-04-24 10:28:45.325505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.282 [2024-04-24 10:28:45.325520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.282 qpair failed and we were unable to recover it. 00:33:32.282 [2024-04-24 10:28:45.335402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.282 [2024-04-24 10:28:45.335480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.282 [2024-04-24 10:28:45.335497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.282 [2024-04-24 10:28:45.335503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.282 [2024-04-24 10:28:45.335514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.282 [2024-04-24 10:28:45.335529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.282 qpair failed and we were unable to recover it. 00:33:32.282 [2024-04-24 10:28:45.345437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.282 [2024-04-24 10:28:45.345515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.282 [2024-04-24 10:28:45.345531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.282 [2024-04-24 10:28:45.345539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.282 [2024-04-24 10:28:45.345545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.282 [2024-04-24 10:28:45.345561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.282 qpair failed and we were unable to recover it. 
00:33:32.282 [2024-04-24 10:28:45.355486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.282 [2024-04-24 10:28:45.355609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.282 [2024-04-24 10:28:45.355626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.282 [2024-04-24 10:28:45.355633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.282 [2024-04-24 10:28:45.355639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.282 [2024-04-24 10:28:45.355655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.282 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.365562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.365641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.365658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.365665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.365671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.365687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.375585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.375663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.375678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.375685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.375691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.375706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 
00:33:32.283 [2024-04-24 10:28:45.385540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.385618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.385633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.385640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.385646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.385665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.395582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.395656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.395672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.395679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.395685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.395700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.405602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.405679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.405695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.405702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.405708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.405724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 
00:33:32.283 [2024-04-24 10:28:45.415719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.415794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.415810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.415817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.415823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.415839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.425727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.425805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.425822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.425832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.425838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.425854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.435759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.435835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.435851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.435858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.435864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.435879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 
00:33:32.283 [2024-04-24 10:28:45.445801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.445880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.445897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.445903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.445909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.445924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.455844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.455923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.455939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.455946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.455952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.455968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.465866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.465945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.465962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.465969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.465975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.465991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 
00:33:32.283 [2024-04-24 10:28:45.475820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.475904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.475921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.475928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.475934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.475950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.485905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.485982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.486000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.283 [2024-04-24 10:28:45.486006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.283 [2024-04-24 10:28:45.486013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.283 [2024-04-24 10:28:45.486029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.283 qpair failed and we were unable to recover it. 00:33:32.283 [2024-04-24 10:28:45.495924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.283 [2024-04-24 10:28:45.495996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.283 [2024-04-24 10:28:45.496013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.496020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.496027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.496042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 
00:33:32.284 [2024-04-24 10:28:45.505962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.284 [2024-04-24 10:28:45.506046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.284 [2024-04-24 10:28:45.506062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.506074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.506081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.506096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 00:33:32.284 [2024-04-24 10:28:45.516009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.284 [2024-04-24 10:28:45.516089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.284 [2024-04-24 10:28:45.516106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.516116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.516122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.516138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 00:33:32.284 [2024-04-24 10:28:45.525952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.284 [2024-04-24 10:28:45.526031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.284 [2024-04-24 10:28:45.526047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.526054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.526060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.526080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 
00:33:32.284 [2024-04-24 10:28:45.536039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.284 [2024-04-24 10:28:45.536123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.284 [2024-04-24 10:28:45.536140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.536147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.536153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.536168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 00:33:32.284 [2024-04-24 10:28:45.546022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.284 [2024-04-24 10:28:45.546119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.284 [2024-04-24 10:28:45.546136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.546143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.546149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.546165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 00:33:32.284 [2024-04-24 10:28:45.556045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.284 [2024-04-24 10:28:45.556127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.284 [2024-04-24 10:28:45.556143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.284 [2024-04-24 10:28:45.556150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.284 [2024-04-24 10:28:45.556156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:32.284 [2024-04-24 10:28:45.556171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:32.284 qpair failed and we were unable to recover it. 
00:33:32.546 [2024-04-24 10:28:45.566077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.566158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.566174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.566181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.566187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.566203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.576172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.576252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.576269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.576276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.576282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.576298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.586174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.586253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.586269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.586275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.586282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.586297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.596214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.596291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.596307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.596314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.596321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.596335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.606248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.606335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.606354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.606361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.606367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.606382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.616229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.616376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.616393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.616400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.616405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.616422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.626225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.626303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.626319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.626326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.626333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.626349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.636322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.636393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.636409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.636416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.636422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.636437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.646380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.646454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.646470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.646477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.646483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.646503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.656345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.656419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.656436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.656442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.656448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.656463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.666453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.666532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.666548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.666555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.666561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.666576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.546 qpair failed and we were unable to recover it.
00:33:32.546 [2024-04-24 10:28:45.676393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.546 [2024-04-24 10:28:45.676468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.546 [2024-04-24 10:28:45.676485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.546 [2024-04-24 10:28:45.676491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.546 [2024-04-24 10:28:45.676497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.546 [2024-04-24 10:28:45.676512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.686416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.686486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.686503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.686510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.686516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.686531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.696526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.696602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.696622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.696630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.696635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.696651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.706530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.706618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.706635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.706642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.706648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.706664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.716500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.716570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.716587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.716594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.716599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.716616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.726572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.726664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.726680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.726687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.726693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.726709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.736618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.736694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.736710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.736717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.736723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.736742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.746667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.746743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.746760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.746766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.746772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.746787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.756693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.756776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.756793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.756800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.756806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.756821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.766726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.766801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.766818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.766825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.766831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.766846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.776769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.776844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.776860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.776867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.776873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.776888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.786783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.786864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.786886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.786893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.786899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.786914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.796735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.796811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.796828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.796834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.796841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.796856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.806850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.806958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.547 [2024-04-24 10:28:45.806975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.547 [2024-04-24 10:28:45.806982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.547 [2024-04-24 10:28:45.806988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.547 [2024-04-24 10:28:45.807003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.547 qpair failed and we were unable to recover it.
00:33:32.547 [2024-04-24 10:28:45.816896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.547 [2024-04-24 10:28:45.816973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.548 [2024-04-24 10:28:45.816990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.548 [2024-04-24 10:28:45.816997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.548 [2024-04-24 10:28:45.817003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.548 [2024-04-24 10:28:45.817019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.548 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.826957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.827066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.827087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.827094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.827103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.827119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.836956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.837028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.837044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.837051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.837057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.837077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.846974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.847046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.847063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.847077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.847083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.847099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.856947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.857022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.857039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.857045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.857052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.857067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.867040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.867122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.867139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.867147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.867152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.867168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.877061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.877147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.877163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.877170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.877176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.877192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.887085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.887160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.887176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.887184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.887190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.887205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.897129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.897208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.897224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.897231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.897237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.897252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.809 [2024-04-24 10:28:45.907130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.809 [2024-04-24 10:28:45.907208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.809 [2024-04-24 10:28:45.907224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.809 [2024-04-24 10:28:45.907231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.809 [2024-04-24 10:28:45.907238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.809 [2024-04-24 10:28:45.907253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.809 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.917181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.917259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.917275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.917282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.917291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.917308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.927209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.927282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.927298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.927305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.927312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.927327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.937259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.937335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.937351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.937358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.937364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.937379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.947273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.947348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.947364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.947371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.947377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.947392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.957309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.957383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.957400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.957407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.957413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.957428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.967351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.967426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.967443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.967449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.967456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.967471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.977295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.977372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.977388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.977395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.977405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.977419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.987405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.987483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.987500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.987507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.987513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.987528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:45.997427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:45.997504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:45.997520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:45.997527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:45.997533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:45.997548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:46.007450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:46.007526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:46.007543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:46.007553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:46.007559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:46.007575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:46.017498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:46.017574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:46.017590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:46.017597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:46.017603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:46.017618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:46.027451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:46.027526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:46.027543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:46.027550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:46.027556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:46.027571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:46.037567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:46.037647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:46.037663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.810 [2024-04-24 10:28:46.037671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.810 [2024-04-24 10:28:46.037677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.810 [2024-04-24 10:28:46.037692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.810 qpair failed and we were unable to recover it.
00:33:32.810 [2024-04-24 10:28:46.047624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.810 [2024-04-24 10:28:46.047735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.810 [2024-04-24 10:28:46.047751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.811 [2024-04-24 10:28:46.047758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.811 [2024-04-24 10:28:46.047764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.811 [2024-04-24 10:28:46.047780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.811 qpair failed and we were unable to recover it.
00:33:32.811 [2024-04-24 10:28:46.057633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.811 [2024-04-24 10:28:46.057708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.811 [2024-04-24 10:28:46.057725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.811 [2024-04-24 10:28:46.057731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.811 [2024-04-24 10:28:46.057737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.811 [2024-04-24 10:28:46.057752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.811 qpair failed and we were unable to recover it.
00:33:32.811 [2024-04-24 10:28:46.067685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.811 [2024-04-24 10:28:46.067759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.811 [2024-04-24 10:28:46.067776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.811 [2024-04-24 10:28:46.067783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.811 [2024-04-24 10:28:46.067789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.811 [2024-04-24 10:28:46.067804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.811 qpair failed and we were unable to recover it.
00:33:32.811 [2024-04-24 10:28:46.077675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:32.811 [2024-04-24 10:28:46.077774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:32.811 [2024-04-24 10:28:46.077790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:32.811 [2024-04-24 10:28:46.077798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:32.811 [2024-04-24 10:28:46.077803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:32.811 [2024-04-24 10:28:46.077818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:32.811 qpair failed and we were unable to recover it.
00:33:33.072 [2024-04-24 10:28:46.087760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:33.072 [2024-04-24 10:28:46.087838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:33.072 [2024-04-24 10:28:46.087854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:33.072 [2024-04-24 10:28:46.087861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:33.072 [2024-04-24 10:28:46.087868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:33.072 [2024-04-24 10:28:46.087883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:33.072 qpair failed and we were unable to recover it.
00:33:33.072 [2024-04-24 10:28:46.097734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:33.072 [2024-04-24 10:28:46.097806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:33.072 [2024-04-24 10:28:46.097826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:33.072 [2024-04-24 10:28:46.097833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:33.072 [2024-04-24 10:28:46.097839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:33.072 [2024-04-24 10:28:46.097854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:33.072 qpair failed and we were unable to recover it.
00:33:33.072 [2024-04-24 10:28:46.107764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:33.072 [2024-04-24 10:28:46.107845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:33.072 [2024-04-24 10:28:46.107862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:33.072 [2024-04-24 10:28:46.107868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:33.072 [2024-04-24 10:28:46.107875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:33.072 [2024-04-24 10:28:46.107890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:33.072 qpair failed and we were unable to recover it.
00:33:33.072 [2024-04-24 10:28:46.117735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:33.072 [2024-04-24 10:28:46.117811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:33.072 [2024-04-24 10:28:46.117826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:33.072 [2024-04-24 10:28:46.117833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:33.072 [2024-04-24 10:28:46.117839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:33.072 [2024-04-24 10:28:46.117853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:33.072 qpair failed and we were unable to recover it.
00:33:33.072 [2024-04-24 10:28:46.127825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:33.072 [2024-04-24 10:28:46.127903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:33.072 [2024-04-24 10:28:46.127919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:33.072 [2024-04-24 10:28:46.127926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:33.072 [2024-04-24 10:28:46.127934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90
00:33:33.072 [2024-04-24 10:28:46.127950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:33.072 qpair failed and we were unable to recover it.
00:33:33.072 [2024-04-24 10:28:46.137852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.072 [2024-04-24 10:28:46.137931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.072 [2024-04-24 10:28:46.137948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.072 [2024-04-24 10:28:46.137955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.072 [2024-04-24 10:28:46.137961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:33.072 [2024-04-24 10:28:46.137976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:33.072 qpair failed and we were unable to recover it. 00:33:33.072 [2024-04-24 10:28:46.147870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.072 [2024-04-24 10:28:46.147944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.072 [2024-04-24 10:28:46.147961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.072 [2024-04-24 10:28:46.147968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.072 [2024-04-24 10:28:46.147974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:33.072 [2024-04-24 10:28:46.147989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:33.072 qpair failed and we were unable to recover it. 00:33:33.072 [2024-04-24 10:28:46.157892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.072 [2024-04-24 10:28:46.157973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.072 [2024-04-24 10:28:46.157989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.072 [2024-04-24 10:28:46.157996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.072 [2024-04-24 10:28:46.158003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:33.073 [2024-04-24 10:28:46.158018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:33.073 qpair failed and we were unable to recover it. 
00:33:33.073 [2024-04-24 10:28:46.167909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.167985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.168004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.168012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.168019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:33.073 [2024-04-24 10:28:46.168034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.177935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.178033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.178058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.178068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.178082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.178102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.187995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.188121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.188143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.188151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.188157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.188173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 
00:33:33.073 [2024-04-24 10:28:46.198081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.198158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.198176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.198183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.198189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.198204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.207977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.208056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.208077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.208085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.208091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.208106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.218096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.218173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.218191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.218198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.218204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.218219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 
00:33:33.073 [2024-04-24 10:28:46.228081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.228153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.228171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.228178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.228184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.228203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.238123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.238200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.238217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.238225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.238231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.238246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.248053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.248134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.248151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.248159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.248164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.248179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 
00:33:33.073 [2024-04-24 10:28:46.258178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.258249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.258267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.258274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.258280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.258296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.268199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.268279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.268297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.268304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.268311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.268326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.278246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.278316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.278337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.278344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.278350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.278365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 
00:33:33.073 [2024-04-24 10:28:46.288261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.288336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.073 [2024-04-24 10:28:46.288353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.073 [2024-04-24 10:28:46.288360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.073 [2024-04-24 10:28:46.288366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.073 [2024-04-24 10:28:46.288380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.073 qpair failed and we were unable to recover it. 00:33:33.073 [2024-04-24 10:28:46.298225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.073 [2024-04-24 10:28:46.298300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.074 [2024-04-24 10:28:46.298317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.074 [2024-04-24 10:28:46.298324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.074 [2024-04-24 10:28:46.298330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.074 [2024-04-24 10:28:46.298345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.074 qpair failed and we were unable to recover it. 00:33:33.074 [2024-04-24 10:28:46.308329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.074 [2024-04-24 10:28:46.308408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.074 [2024-04-24 10:28:46.308425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.074 [2024-04-24 10:28:46.308432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.074 [2024-04-24 10:28:46.308438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.074 [2024-04-24 10:28:46.308452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.074 qpair failed and we were unable to recover it. 
00:33:33.074 [2024-04-24 10:28:46.318281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.074 [2024-04-24 10:28:46.318350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.074 [2024-04-24 10:28:46.318368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.074 [2024-04-24 10:28:46.318375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.074 [2024-04-24 10:28:46.318381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.074 [2024-04-24 10:28:46.318402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.074 qpair failed and we were unable to recover it. 00:33:33.074 [2024-04-24 10:28:46.328382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.074 [2024-04-24 10:28:46.328455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.074 [2024-04-24 10:28:46.328472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.074 [2024-04-24 10:28:46.328480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.074 [2024-04-24 10:28:46.328486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.074 [2024-04-24 10:28:46.328500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.074 qpair failed and we were unable to recover it. 00:33:33.074 [2024-04-24 10:28:46.338416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.074 [2024-04-24 10:28:46.338491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.074 [2024-04-24 10:28:46.338508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.074 [2024-04-24 10:28:46.338515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.074 [2024-04-24 10:28:46.338522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.074 [2024-04-24 10:28:46.338538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.074 qpair failed and we were unable to recover it. 
00:33:33.074 [2024-04-24 10:28:46.348430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.074 [2024-04-24 10:28:46.348509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.074 [2024-04-24 10:28:46.348526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.074 [2024-04-24 10:28:46.348534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.074 [2024-04-24 10:28:46.348541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.074 [2024-04-24 10:28:46.348555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.074 qpair failed and we were unable to recover it. 00:33:33.334 [2024-04-24 10:28:46.358456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.358531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.358548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.358555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.358562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.334 [2024-04-24 10:28:46.358576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.334 qpair failed and we were unable to recover it. 00:33:33.334 [2024-04-24 10:28:46.368413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.368486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.368508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.368515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.368522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d3710 00:33:33.334 [2024-04-24 10:28:46.368537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:33.334 qpair failed and we were unable to recover it. 
00:33:33.334 [2024-04-24 10:28:46.378479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.378578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.378606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.378618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.378627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90 00:33:33.334 [2024-04-24 10:28:46.378651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:33.334 qpair failed and we were unable to recover it. 00:33:33.334 [2024-04-24 10:28:46.388521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.388600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.388617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.388624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.388631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aec000b90 00:33:33.334 [2024-04-24 10:28:46.388647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:33.334 qpair failed and we were unable to recover it. 00:33:33.334 [2024-04-24 10:28:46.398577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.398668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.398696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.398707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.398717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:33.334 [2024-04-24 10:28:46.398740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:33.334 qpair failed and we were unable to recover it. 
00:33:33.334 [2024-04-24 10:28:46.408595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.408675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.408692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.408700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.408707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2afc000b90 00:33:33.334 [2024-04-24 10:28:46.408727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:33.334 qpair failed and we were unable to recover it. 00:33:33.334 [2024-04-24 10:28:46.408865] nvme_ctrlr.c:4325:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:33.334 A controller has encountered a failure and is being reset. 00:33:33.334 [2024-04-24 10:28:46.418613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.418874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.418896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.418905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.418911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:33.334 [2024-04-24 10:28:46.418931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.334 qpair failed and we were unable to recover it. 00:33:33.334 [2024-04-24 10:28:46.428655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.334 [2024-04-24 10:28:46.428731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.334 [2024-04-24 10:28:46.428748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.334 [2024-04-24 10:28:46.428755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.334 [2024-04-24 10:28:46.428762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2af4000b90 00:33:33.334 [2024-04-24 10:28:46.428777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.334 qpair failed and we were unable to recover it. 00:33:33.334 Controller properly reset. 
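Each failing group above is one host-side NVMe-oF CONNECT attempt that the target rejects ("Unknown controller ID 0x1", sct 1 / sc 130) while the controller is going down; the Keep Alive failure above is what finally triggers the reset. For reference, a single such attempt can be replayed by hand with stock nvme-cli against the same listener. This is a sketch only, with the address, port, and subsystem NQN copied from the entries above; it is not part of the test's own tooling.
  # Sketch: one manual CONNECT against the listener seen in the log (requires nvme-cli).
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                                  # check whether the association actually came up
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # tidy up if it did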
00:33:33.334 Initializing NVMe Controllers
00:33:33.334 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:33.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:33.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:33:33.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:33:33.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:33:33.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:33:33.334 Initialization complete. Launching workers.
00:33:33.334 Starting thread on core 1
00:33:33.334 Starting thread on core 2
00:33:33.334 Starting thread on core 3
00:33:33.334 Starting thread on core 0
00:33:33.334 10:28:46 -- host/target_disconnect.sh@59 -- # sync
00:33:33.334
00:33:33.334 real 0m11.370s
00:33:33.334 user 0m20.862s
00:33:33.334 sys 0m4.311s
00:33:33.334 10:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:33.334 10:28:46 -- common/autotest_common.sh@10 -- # set +x
00:33:33.335 ************************************
00:33:33.335 END TEST nvmf_target_disconnect_tc2
00:33:33.335 ************************************
00:33:33.335 10:28:46 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:33:33.335 10:28:46 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:33:33.335 10:28:46 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:33:33.335 10:28:46 -- nvmf/common.sh@476 -- # nvmfcleanup
00:33:33.335 10:28:46 -- nvmf/common.sh@116 -- # sync
00:33:33.335 10:28:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:33:33.335 10:28:46 -- nvmf/common.sh@119 -- # set +e
00:33:33.335 10:28:46 -- nvmf/common.sh@120 -- # for i in {1..20}
00:33:33.335 10:28:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:33:33.335 rmmod nvme_tcp
00:33:33.335 rmmod nvme_fabrics
00:33:33.335 rmmod nvme_keyring
00:33:33.335 10:28:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:33:33.335 10:28:46 -- nvmf/common.sh@123 -- # set -e
00:33:33.335 10:28:46 -- nvmf/common.sh@124 -- # return 0
00:33:33.335 10:28:46 -- nvmf/common.sh@477 -- # '[' -n 501469 ']'
00:33:33.335 10:28:46 -- nvmf/common.sh@478 -- # killprocess 501469
00:33:33.335 10:28:46 -- common/autotest_common.sh@926 -- # '[' -z 501469 ']'
00:33:33.335 10:28:46 -- common/autotest_common.sh@930 -- # kill -0 501469
00:33:33.335 10:28:46 -- common/autotest_common.sh@931 -- # uname
00:33:33.335 10:28:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:33:33.335 10:28:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 501469
00:33:33.335 10:28:46 -- common/autotest_common.sh@932 -- # process_name=reactor_4
00:33:33.335 10:28:46 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']'
00:33:33.335 10:28:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 501469'
00:33:33.335 killing process with pid 501469
00:33:33.335 10:28:46 -- common/autotest_common.sh@945 -- # kill 501469
00:33:33.335 10:28:46 -- common/autotest_common.sh@950 -- # wait 501469
00:33:33.594 10:28:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:33:33.594 10:28:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:33:33.594 10:28:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:33:33.594 10:28:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:33.594 10:28:46 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:33:33.594 10:28:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:33.594 10:28:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:33.594 10:28:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:36.130 10:28:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:33:36.130
00:33:36.130 real 0m19.312s
00:33:36.130 user 0m48.359s
00:33:36.130 sys 0m8.564s
00:33:36.130 10:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:36.130 10:28:48 -- common/autotest_common.sh@10 -- # set +x
00:33:36.130 ************************************
00:33:36.130 END TEST nvmf_target_disconnect
00:33:36.130 ************************************
00:33:36.130 10:28:48 -- nvmf/nvmf.sh@126 -- # timing_exit host
00:33:36.130 10:28:48 -- common/autotest_common.sh@718 -- # xtrace_disable
00:33:36.130 10:28:48 -- common/autotest_common.sh@10 -- # set +x
00:33:36.130 10:28:48 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:33:36.130
00:33:36.130 real 22m55.637s
00:33:36.130 user 61m36.989s
00:33:36.130 sys 5m53.473s
00:33:36.130 10:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:36.130 10:28:48 -- common/autotest_common.sh@10 -- # set +x
00:33:36.130 ************************************
00:33:36.130 END TEST nvmf_tcp
00:33:36.130 ************************************
00:33:36.130 10:28:49 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]]
00:33:36.130 10:28:49 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:36.130 10:28:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:33:36.130 10:28:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:36.130 10:28:49 -- common/autotest_common.sh@10 -- # set +x
00:33:36.130 ************************************
00:33:36.130 START TEST spdkcli_nvmf_tcp
00:33:36.130 ************************************
00:33:36.130 10:28:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:36.130 * Looking for test storage...
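The nvmftestfini/nvmfcleanup trace above boils down to a handful of commands. As a condensed stand-alone rendering (a sketch, not the harness's own code: the PID and the interface name cvl_0_1 are taken from the log, and the function name is invented here):
  # Sketch of the teardown performed above; assumes the target was started by this shell.
  cleanup_nvmf_tcp() {
      local pid=$1 iface=$2
      kill "$pid" 2>/dev/null && wait "$pid" 2>/dev/null   # killprocess / wait, as traced above
      modprobe -v -r nvme-tcp      # per the log this also drops nvme_fabrics and nvme_keyring
      modprobe -v -r nvme-fabrics
      ip -4 addr flush "$iface"    # mirrors 'ip -4 addr flush cvl_0_1'
  }
  cleanup_nvmf_tcp 501469 cvl_0_1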
00:33:36.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:36.130 10:28:49 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:36.130 10:28:49 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:36.130 10:28:49 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:36.130 10:28:49 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.130 10:28:49 -- nvmf/common.sh@7 -- # uname -s 00:33:36.130 10:28:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.130 10:28:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.130 10:28:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.130 10:28:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.130 10:28:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.130 10:28:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.130 10:28:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.130 10:28:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.130 10:28:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.130 10:28:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.130 10:28:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:36.130 10:28:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:36.130 10:28:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.130 10:28:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.130 10:28:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.131 10:28:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.131 10:28:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.131 10:28:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.131 10:28:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.131 10:28:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.131 10:28:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.131 10:28:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.131 10:28:49 -- paths/export.sh@5 -- # export PATH 00:33:36.131 10:28:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.131 10:28:49 -- nvmf/common.sh@46 -- # : 0 00:33:36.131 10:28:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:36.131 10:28:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:36.131 10:28:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:36.131 10:28:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.131 10:28:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.131 10:28:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:36.131 10:28:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:36.131 10:28:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:36.131 10:28:49 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:36.131 10:28:49 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:36.131 10:28:49 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:36.131 10:28:49 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:36.131 10:28:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:36.131 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 10:28:49 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:36.131 10:28:49 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=503012 00:33:36.131 10:28:49 -- spdkcli/common.sh@34 -- # waitforlisten 503012 00:33:36.131 10:28:49 -- common/autotest_common.sh@819 -- # '[' -z 503012 ']' 00:33:36.131 10:28:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.131 10:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:36.131 10:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.131 10:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:36.131 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.131 10:28:49 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:36.131 [2024-04-24 10:28:49.161320] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
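For the spdkcli suite, run_nvmf_tgt above launches build/bin/nvmf_tgt with a two-core mask and waitforlisten blocks until the app answers on its RPC socket (/var/tmp/spdk.sock, per the "Waiting for process to start up..." message). A minimal hand-run equivalent is sketched below; polling rpc_get_methods with rpc.py's -t timeout flag is this example's readiness probe, not necessarily the harness's exact mechanism.
  # Sketch: launch the target as above, then wait for its RPC socket to answer.
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt ($tgt_pid) is up on /var/tmp/spdk.sock"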
00:33:36.131 [2024-04-24 10:28:49.161371] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503012 ] 00:33:36.131 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.131 [2024-04-24 10:28:49.216727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:36.131 [2024-04-24 10:28:49.294254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:36.131 [2024-04-24 10:28:49.294391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.131 [2024-04-24 10:28:49.294394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.698 10:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:36.698 10:28:49 -- common/autotest_common.sh@852 -- # return 0 00:33:36.698 10:28:49 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:36.698 10:28:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:36.698 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.956 10:28:49 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:36.957 10:28:49 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:36.957 10:28:49 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:36.957 10:28:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:36.957 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.957 10:28:49 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:36.957 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:36.957 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:36.957 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:36.957 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:36.957 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:36.957 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:36.957 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:36.957 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:36.957 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:36.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:36.957 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:36.957 ' 00:33:37.215 [2024-04-24 10:28:50.324462] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:39.120 [2024-04-24 10:28:52.368165] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.496 [2024-04-24 10:28:53.544212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:43.028 [2024-04-24 10:28:55.707090] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:44.405 [2024-04-24 10:28:57.565202] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:45.783 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:45.783 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:45.783 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:45.783 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:45.783 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:45.783 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:45.783 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:45.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:45.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
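The spdkcli_job.py invocation above replays the whole quoted command list as one batch; the same tree paths can also be driven one at a time with scripts/spdkcli.py, which the check_match step below invokes as "spdkcli.py ll /nvmf". A pared-down manual session, with commands copied from the batch above (a sketch; assumes a target already listening on the default RPC socket):
  SPDKCLI=./scripts/spdkcli.py
  $SPDKCLI "/bdevs/malloc create 32 512 Malloc3"
  $SPDKCLI "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
  $SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  $SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
  $SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
  $SPDKCLI ll /nvmf    # print the resulting tree, as the match check below does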
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:33:45.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:33:45.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:33:45.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:33:45.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:33:45.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:33:45.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:33:45.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:33:45.784 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:33:45.784 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:33:46.042 10:28:59 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:33:46.042 10:28:59 -- common/autotest_common.sh@718 -- # xtrace_disable
00:33:46.042 10:28:59 -- common/autotest_common.sh@10 -- # set +x
00:33:46.042 10:28:59 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:33:46.042 10:28:59 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:46.042 10:28:59 -- common/autotest_common.sh@10 -- # set +x
00:33:46.042 10:28:59 -- spdkcli/nvmf.sh@69 -- # check_match
00:33:46.042 10:28:59 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:33:46.301 10:28:59 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:33:46.301 10:28:59 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:33:46.301 10:28:59 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:33:46.301 10:28:59 -- common/autotest_common.sh@718 -- # xtrace_disable
00:33:46.301 10:28:59 -- common/autotest_common.sh@10 -- # set +x
00:33:46.301 10:28:59 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:33:46.301 10:28:59 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:46.301 10:28:59 -- common/autotest_common.sh@10 -- # set +x
00:33:46.301 10:28:59 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:33:46.301 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:33:46.301 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:33:46.301 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:33:46.301 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:33:46.301 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:33:46.301 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:33:46.301 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:33:46.301 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:33:46.301 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:33:46.301 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:33:46.301 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:33:46.301 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:33:46.301 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:33:46.301 '
00:33:51.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:33:51.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:33:51.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:33:51.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:33:51.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:33:51.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:33:51.573 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:33:51.573 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:33:51.573 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:33:51.573 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:33:51.573 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:33:51.573 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:33:51.573 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:33:51.573 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:33:51.573 10:29:04 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:33:51.573 10:29:04 -- common/autotest_common.sh@718 -- # xtrace_disable
00:33:51.573 10:29:04 -- common/autotest_common.sh@10 -- # set +x
00:33:51.573 10:29:04 -- spdkcli/nvmf.sh@90 -- # killprocess 503012
00:33:51.573 10:29:04 -- common/autotest_common.sh@926 -- # '[' -z 503012 ']'
00:33:51.573 10:29:04 -- common/autotest_common.sh@930 -- # kill -0 503012
00:33:51.574 10:29:04 -- common/autotest_common.sh@931 -- # uname
00:33:51.574 10:29:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:33:51.574 10:29:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 503012
00:33:51.574 10:29:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:33:51.574 10:29:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:33:51.574 10:29:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 503012'
00:33:51.574 killing process with pid 503012
00:33:51.574 10:29:04 -- common/autotest_common.sh@945 -- # kill 503012
00:33:51.574 [2024-04-24 10:29:04.597541] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:33:51.574 10:29:04 -- common/autotest_common.sh@950 -- # wait 503012
00:33:51.574 10:29:04 -- spdkcli/nvmf.sh@1 -- # cleanup
00:33:51.574 10:29:04 -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:33:51.574 10:29:04 -- spdkcli/common.sh@13 -- # '[' -n 503012 ']'
00:33:51.574 10:29:04 -- spdkcli/common.sh@14 -- # killprocess 503012
00:33:51.574 10:29:04 -- common/autotest_common.sh@926 -- # '[' -z 503012 ']'
00:33:51.574 10:29:04 -- common/autotest_common.sh@930 -- # kill -0 503012
00:33:51.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (503012) - No such process
00:33:51.574 10:29:04 -- common/autotest_common.sh@953 -- # echo 'Process with pid 503012 is not found'
00:33:51.574 Process with pid 503012 is not found
00:33:51.574 10:29:04 -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:33:51.574 10:29:04 -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:33:51.574 10:29:04 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:33:51.574
00:33:51.574 real 0m15.790s
00:33:51.574 user 0m32.625s
00:33:51.574 sys 0m0.716s
00:33:51.574 10:29:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:51.574 10:29:04 -- common/autotest_common.sh@10 -- # set +x
00:33:51.574 ************************************
00:33:51.574 END TEST spdkcli_nvmf_tcp
00:33:51.574 ************************************
00:33:51.574 10:29:04 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:33:51.574 10:29:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:33:51.574 10:29:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:51.574 10:29:04 -- common/autotest_common.sh@10 -- # set +x
00:33:51.574 ************************************
00:33:51.574 START TEST nvmf_identify_passthru
00:33:51.574 ************************************
00:33:51.574 10:29:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:33:51.879 * Looking for test storage...
00:33:51.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:51.879 10:29:04 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.879 10:29:04 -- nvmf/common.sh@7 -- # uname -s 00:33:51.879 10:29:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.879 10:29:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.879 10:29:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.879 10:29:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.879 10:29:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.879 10:29:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.879 10:29:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.879 10:29:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.879 10:29:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.879 10:29:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.879 10:29:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:51.879 10:29:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:51.879 10:29:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.879 10:29:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.879 10:29:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.879 10:29:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.879 10:29:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.879 10:29:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.879 10:29:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.879 10:29:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.879 10:29:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.879 10:29:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.879 10:29:04 -- paths/export.sh@5 -- # export PATH 00:33:51.880 10:29:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.880 10:29:04 -- nvmf/common.sh@46 -- # : 0 00:33:51.880 10:29:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:51.880 10:29:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:51.880 10:29:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:51.880 10:29:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.880 10:29:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.880 10:29:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:51.880 10:29:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:51.880 10:29:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:51.880 10:29:04 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.880 10:29:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.880 10:29:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.880 10:29:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.880 10:29:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.880 10:29:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.880 10:29:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.880 10:29:04 -- paths/export.sh@5 -- # export PATH 00:33:51.880 10:29:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.880 10:29:04 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:33:51.880 10:29:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:51.880 10:29:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.880 10:29:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:51.880 10:29:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:51.880 10:29:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:51.880 10:29:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.880 10:29:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:51.880 10:29:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.880 10:29:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:51.880 10:29:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:51.880 10:29:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:51.880 10:29:04 -- common/autotest_common.sh@10 -- # set +x 00:33:57.183 10:29:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:57.183 10:29:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:57.183 10:29:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:57.183 10:29:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:57.183 10:29:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:57.183 10:29:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:57.183 10:29:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:57.183 10:29:09 -- nvmf/common.sh@294 -- # net_devs=() 00:33:57.183 10:29:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:57.183 10:29:09 -- nvmf/common.sh@295 -- # e810=() 00:33:57.183 10:29:09 -- nvmf/common.sh@295 -- # local -ga e810 00:33:57.183 10:29:09 -- nvmf/common.sh@296 -- # x722=() 00:33:57.183 10:29:09 -- nvmf/common.sh@296 -- # local -ga x722 00:33:57.183 10:29:09 -- nvmf/common.sh@297 -- # mlx=() 00:33:57.183 10:29:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:57.183 10:29:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.183 10:29:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:57.183 10:29:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:57.183 10:29:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:57.183 10:29:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:57.183 10:29:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:57.183 Found 0000:86:00.0 (0x8086 - 
0x159b) 00:33:57.183 10:29:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:57.183 10:29:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:57.183 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:57.183 10:29:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:57.183 10:29:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:57.183 10:29:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.183 10:29:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:57.183 10:29:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.183 10:29:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:57.183 Found net devices under 0000:86:00.0: cvl_0_0 00:33:57.183 10:29:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.183 10:29:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:57.183 10:29:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.183 10:29:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:57.183 10:29:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.183 10:29:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:57.183 Found net devices under 0000:86:00.1: cvl_0_1 00:33:57.183 10:29:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.183 10:29:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:57.183 10:29:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:57.183 10:29:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:57.183 10:29:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:57.183 10:29:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.183 10:29:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.183 10:29:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.183 10:29:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:57.183 10:29:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.183 10:29:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.183 10:29:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:57.183 10:29:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.183 10:29:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.183 10:29:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:57.183 10:29:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:57.183 10:29:09 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:57.183 10:29:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.183 10:29:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.183 10:29:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.183 10:29:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:57.183 10:29:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.184 10:29:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.184 10:29:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.184 10:29:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:57.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:33:57.184 00:33:57.184 --- 10.0.0.2 ping statistics --- 00:33:57.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.184 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:33:57.184 10:29:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:57.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:33:57.184 00:33:57.184 --- 10.0.0.1 ping statistics --- 00:33:57.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.184 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:33:57.184 10:29:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.184 10:29:09 -- nvmf/common.sh@410 -- # return 0 00:33:57.184 10:29:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:57.184 10:29:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.184 10:29:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:57.184 10:29:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:57.184 10:29:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.184 10:29:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:57.184 10:29:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:57.184 10:29:09 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:57.184 10:29:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:57.184 10:29:09 -- common/autotest_common.sh@10 -- # set +x 00:33:57.184 10:29:09 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:57.184 10:29:09 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:57.184 10:29:09 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:57.184 10:29:09 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:57.184 10:29:09 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:57.184 10:29:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:57.184 10:29:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:57.184 10:29:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:57.184 10:29:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:57.184 10:29:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:57.184 10:29:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:57.184 10:29:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:57.184 10:29:09 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:57.184 10:29:09 -- 
target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:57.184 10:29:09 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:57.184 10:29:09 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:57.184 10:29:09 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:57.184 10:29:09 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:57.184 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.378 10:29:14 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:34:01.379 10:29:14 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:01.379 10:29:14 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:01.379 10:29:14 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:01.379 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.572 10:29:18 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:05.572 10:29:18 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:05.572 10:29:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:05.572 10:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.572 10:29:18 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:05.572 10:29:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:05.572 10:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.572 10:29:18 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:05.572 10:29:18 -- target/identify_passthru.sh@31 -- # nvmfpid=509896 00:34:05.572 10:29:18 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:05.572 10:29:18 -- target/identify_passthru.sh@35 -- # waitforlisten 509896 00:34:05.572 10:29:18 -- common/autotest_common.sh@819 -- # '[' -z 509896 ']' 00:34:05.572 10:29:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.572 10:29:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:05.572 10:29:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.572 10:29:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:05.572 10:29:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.572 [2024-04-24 10:29:18.318959] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:34:05.572 [2024-04-24 10:29:18.319005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.572 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.572 [2024-04-24 10:29:18.377488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:05.572 [2024-04-24 10:29:18.455911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:05.572 [2024-04-24 10:29:18.456037] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
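Condensed, the bdf/serial/model discovery traced here reduces to a short pipeline. This sketch follows the commands shown in the trace, assuming an SPDK checkout at $rootdir:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# first NVMe PCI address, as produced by get_first_nvme_bdf
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

# serial/model are taken from the third field of the identify output
identify=$rootdir/build/bin/spdk_nvme_identify
serial=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "$bdf $serial $model"

Note that awk keeps only the third field, so a multi-word model string collapses to its first word, which is why the model number is recorded simply as INTEL here.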
00:34:05.572 [2024-04-24 10:29:18.456045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.572 [2024-04-24 10:29:18.456051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.572 [2024-04-24 10:29:18.456095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.572 [2024-04-24 10:29:18.456144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:05.572 [2024-04-24 10:29:18.456337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:05.572 [2024-04-24 10:29:18.456339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.140 10:29:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:06.140 10:29:19 -- common/autotest_common.sh@852 -- # return 0 00:34:06.140 10:29:19 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:06.140 10:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:06.140 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:06.140 INFO: Log level set to 20 00:34:06.140 INFO: Requests: 00:34:06.140 { 00:34:06.140 "jsonrpc": "2.0", 00:34:06.140 "method": "nvmf_set_config", 00:34:06.140 "id": 1, 00:34:06.140 "params": { 00:34:06.140 "admin_cmd_passthru": { 00:34:06.140 "identify_ctrlr": true 00:34:06.140 } 00:34:06.140 } 00:34:06.140 } 00:34:06.140 00:34:06.140 INFO: response: 00:34:06.140 { 00:34:06.140 "jsonrpc": "2.0", 00:34:06.140 "id": 1, 00:34:06.140 "result": true 00:34:06.140 } 00:34:06.140 00:34:06.140 10:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:06.140 10:29:19 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:06.140 10:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:06.140 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:06.140 INFO: Setting log level to 20 00:34:06.140 INFO: Setting log level to 20 00:34:06.140 INFO: Log level set to 20 00:34:06.140 INFO: Log level set to 20 00:34:06.140 INFO: Requests: 00:34:06.140 { 00:34:06.140 "jsonrpc": "2.0", 00:34:06.140 "method": "framework_start_init", 00:34:06.140 "id": 1 00:34:06.140 } 00:34:06.140 00:34:06.140 INFO: Requests: 00:34:06.140 { 00:34:06.140 "jsonrpc": "2.0", 00:34:06.140 "method": "framework_start_init", 00:34:06.140 "id": 1 00:34:06.140 } 00:34:06.140 00:34:06.140 [2024-04-24 10:29:19.225977] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:06.140 INFO: response: 00:34:06.140 { 00:34:06.140 "jsonrpc": "2.0", 00:34:06.140 "id": 1, 00:34:06.140 "result": true 00:34:06.140 } 00:34:06.140 00:34:06.140 INFO: response: 00:34:06.140 { 00:34:06.140 "jsonrpc": "2.0", 00:34:06.140 "id": 1, 00:34:06.140 "result": true 00:34:06.140 } 00:34:06.140 00:34:06.140 10:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:06.140 10:29:19 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:06.140 10:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:06.140 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:06.140 INFO: Setting log level to 40 00:34:06.140 INFO: Setting log level to 40 00:34:06.140 INFO: Setting log level to 40 00:34:06.140 [2024-04-24 10:29:19.239421] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.140 10:29:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:06.140 10:29:19 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:34:06.140 10:29:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:06.140 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:06.140 10:29:19 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:06.140 10:29:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:06.140 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:09.425 Nvme0n1 00:34:09.425 10:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.425 10:29:22 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:09.425 10:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.425 10:29:22 -- common/autotest_common.sh@10 -- # set +x 00:34:09.425 10:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.425 10:29:22 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:09.425 10:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.425 10:29:22 -- common/autotest_common.sh@10 -- # set +x 00:34:09.425 10:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.425 10:29:22 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:09.425 10:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.425 10:29:22 -- common/autotest_common.sh@10 -- # set +x 00:34:09.425 [2024-04-24 10:29:22.133392] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.426 10:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.426 10:29:22 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:09.426 10:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.426 10:29:22 -- common/autotest_common.sh@10 -- # set +x 00:34:09.426 [2024-04-24 10:29:22.141189] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:09.426 [ 00:34:09.426 { 00:34:09.426 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:09.426 "subtype": "Discovery", 00:34:09.426 "listen_addresses": [], 00:34:09.426 "allow_any_host": true, 00:34:09.426 "hosts": [] 00:34:09.426 }, 00:34:09.426 { 00:34:09.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.426 "subtype": "NVMe", 00:34:09.426 "listen_addresses": [ 00:34:09.426 { 00:34:09.426 "transport": "TCP", 00:34:09.426 "trtype": "TCP", 00:34:09.426 "adrfam": "IPv4", 00:34:09.426 "traddr": "10.0.0.2", 00:34:09.426 "trsvcid": "4420" 00:34:09.426 } 00:34:09.426 ], 00:34:09.426 "allow_any_host": true, 00:34:09.426 "hosts": [], 00:34:09.426 "serial_number": "SPDK00000000000001", 00:34:09.426 "model_number": "SPDK bdev Controller", 00:34:09.426 "max_namespaces": 1, 00:34:09.426 "min_cntlid": 1, 00:34:09.426 "max_cntlid": 65519, 00:34:09.426 "namespaces": [ 00:34:09.426 { 00:34:09.426 "nsid": 1, 00:34:09.426 "bdev_name": "Nvme0n1", 00:34:09.426 "name": "Nvme0n1", 00:34:09.426 "nguid": "A9350DD19B1B452588F9CED130155369", 00:34:09.426 "uuid": "a9350dd1-9b1b-4525-88f9-ced130155369" 00:34:09.426 } 00:34:09.426 ] 00:34:09.426 } 00:34:09.426 ] 00:34:09.426 10:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.426 10:29:22 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:09.426 10:29:22 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:09.426 10:29:22 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:09.426 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.426 10:29:22 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:34:09.426 10:29:22 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:09.426 10:29:22 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:09.426 10:29:22 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:09.426 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.426 10:29:22 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:09.426 10:29:22 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:34:09.426 10:29:22 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:09.426 10:29:22 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.426 10:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.426 10:29:22 -- common/autotest_common.sh@10 -- # set +x 00:34:09.426 10:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.426 10:29:22 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:09.426 10:29:22 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:09.426 10:29:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:09.426 10:29:22 -- nvmf/common.sh@116 -- # sync 00:34:09.426 10:29:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:09.426 10:29:22 -- nvmf/common.sh@119 -- # set +e 00:34:09.426 10:29:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:09.426 10:29:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:09.426 rmmod nvme_tcp 00:34:09.426 rmmod nvme_fabrics 00:34:09.426 rmmod nvme_keyring 00:34:09.426 10:29:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:09.426 10:29:22 -- nvmf/common.sh@123 -- # set -e 00:34:09.426 10:29:22 -- nvmf/common.sh@124 -- # return 0 00:34:09.426 10:29:22 -- nvmf/common.sh@477 -- # '[' -n 509896 ']' 00:34:09.426 10:29:22 -- nvmf/common.sh@478 -- # killprocess 509896 00:34:09.426 10:29:22 -- common/autotest_common.sh@926 -- # '[' -z 509896 ']' 00:34:09.426 10:29:22 -- common/autotest_common.sh@930 -- # kill -0 509896 00:34:09.426 10:29:22 -- common/autotest_common.sh@931 -- # uname 00:34:09.426 10:29:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:09.426 10:29:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 509896 00:34:09.426 10:29:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:09.426 10:29:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:09.426 10:29:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 509896' 00:34:09.426 killing process with pid 509896 00:34:09.426 10:29:22 -- common/autotest_common.sh@945 -- # kill 509896 00:34:09.426 [2024-04-24 10:29:22.526349] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:09.426 10:29:22 -- common/autotest_common.sh@950 -- # wait 509896 00:34:10.802 10:29:24 -- nvmf/common.sh@480 -- 
# '[' '' == iso ']' 00:34:10.803 10:29:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:10.803 10:29:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:10.803 10:29:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:10.803 10:29:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:10.803 10:29:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.803 10:29:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:10.803 10:29:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.342 10:29:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:13.342 00:34:13.342 real 0m21.233s 00:34:13.342 user 0m29.423s 00:34:13.342 sys 0m4.470s 00:34:13.342 10:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.342 10:29:26 -- common/autotest_common.sh@10 -- # set +x 00:34:13.342 ************************************ 00:34:13.342 END TEST nvmf_identify_passthru 00:34:13.342 ************************************ 00:34:13.342 10:29:26 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:13.342 10:29:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:13.342 10:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:13.342 10:29:26 -- common/autotest_common.sh@10 -- # set +x 00:34:13.342 ************************************ 00:34:13.342 START TEST nvmf_dif 00:34:13.342 ************************************ 00:34:13.342 10:29:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:13.342 * Looking for test storage... 00:34:13.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.342 10:29:26 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.342 10:29:26 -- nvmf/common.sh@7 -- # uname -s 00:34:13.342 10:29:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.342 10:29:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.342 10:29:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.342 10:29:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.342 10:29:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.342 10:29:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.342 10:29:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.342 10:29:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.342 10:29:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.342 10:29:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.342 10:29:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:13.342 10:29:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:13.342 10:29:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.342 10:29:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.342 10:29:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.342 10:29:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.342 10:29:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.342 10:29:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.342 10:29:26 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.342 10:29:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.342 10:29:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.342 10:29:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.342 10:29:26 -- paths/export.sh@5 -- # export PATH 00:34:13.342 10:29:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.342 10:29:26 -- nvmf/common.sh@46 -- # : 0 00:34:13.342 10:29:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:13.342 10:29:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:13.342 10:29:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:13.342 10:29:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.342 10:29:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.342 10:29:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:13.342 10:29:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:13.342 10:29:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:13.342 10:29:26 -- target/dif.sh@15 -- # NULL_META=16 00:34:13.342 10:29:26 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:13.342 10:29:26 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:13.342 10:29:26 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:13.342 10:29:26 -- target/dif.sh@135 -- # nvmftestinit 00:34:13.342 10:29:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:13.342 10:29:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.342 10:29:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:13.342 10:29:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:13.342 10:29:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:13.342 10:29:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.342 10:29:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.342 10:29:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.342 10:29:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:13.343 10:29:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
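The gather_supported_nvmf_pci_devs call starting here (traced in full below) buckets NICs by PCI vendor:device ID into e810/x722/mlx pools before TCP test interfaces are chosen. A rough self-contained equivalent using lspci; the real helper reads a sysfs-backed pci_bus_cache and matches a longer list of Mellanox IDs, so treat this as an illustration only:

declare -a e810=() x722=() mlx=()
while read -r slot vd; do
    case "$vd" in
        8086:1592|8086:159b) e810+=("0000:$slot") ;;  # Intel E810 (ice)
        8086:37d2)           x722+=("0000:$slot") ;;  # Intel X722
        15b3:*)              mlx+=("0000:$slot")  ;;  # Mellanox (simplified match)
    esac
done < <(lspci -n | awk '{print $1, $3}')
echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"

On this node the two 0x159b functions land in the e810 pool, matching the 'Found 0000:86:00.0 (0x8086 - 0x159b)' lines below.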
00:34:13.343 10:29:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:13.343 10:29:26 -- common/autotest_common.sh@10 -- # set +x 00:34:18.618 10:29:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:18.618 10:29:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:18.618 10:29:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:18.618 10:29:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:18.618 10:29:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:18.618 10:29:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:18.618 10:29:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:18.618 10:29:31 -- nvmf/common.sh@294 -- # net_devs=() 00:34:18.618 10:29:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:18.618 10:29:31 -- nvmf/common.sh@295 -- # e810=() 00:34:18.618 10:29:31 -- nvmf/common.sh@295 -- # local -ga e810 00:34:18.618 10:29:31 -- nvmf/common.sh@296 -- # x722=() 00:34:18.618 10:29:31 -- nvmf/common.sh@296 -- # local -ga x722 00:34:18.618 10:29:31 -- nvmf/common.sh@297 -- # mlx=() 00:34:18.618 10:29:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:18.618 10:29:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.618 10:29:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:18.618 10:29:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:18.618 10:29:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:18.619 10:29:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:18.619 10:29:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:18.619 10:29:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:18.619 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:18.619 10:29:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:18.619 10:29:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:18.619 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:18.619 10:29:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:34:18.619 10:29:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:18.619 10:29:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:18.619 10:29:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.619 10:29:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:18.619 10:29:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.619 10:29:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:18.619 Found net devices under 0000:86:00.0: cvl_0_0 00:34:18.619 10:29:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.619 10:29:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:18.619 10:29:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.619 10:29:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:18.619 10:29:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.619 10:29:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:18.619 Found net devices under 0000:86:00.1: cvl_0_1 00:34:18.619 10:29:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.619 10:29:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:18.619 10:29:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:18.619 10:29:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:18.619 10:29:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:18.619 10:29:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.619 10:29:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.619 10:29:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.619 10:29:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:18.619 10:29:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.619 10:29:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.619 10:29:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:18.619 10:29:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.619 10:29:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.619 10:29:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:18.619 10:29:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:18.619 10:29:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.619 10:29:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.619 10:29:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.619 10:29:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.619 10:29:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:18.619 10:29:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.619 10:29:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.619 10:29:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.619 10:29:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:18.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:34:18.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:34:18.619 00:34:18.619 --- 10.0.0.2 ping statistics --- 00:34:18.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.619 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:34:18.619 10:29:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:18.619 00:34:18.619 --- 10.0.0.1 ping statistics --- 00:34:18.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.619 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:18.619 10:29:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.619 10:29:31 -- nvmf/common.sh@410 -- # return 0 00:34:18.619 10:29:31 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:18.619 10:29:31 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:21.155 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:21.155 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:21.155 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:21.155 10:29:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.155 10:29:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:21.155 10:29:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:21.155 10:29:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.155 10:29:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:21.155 10:29:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:21.155 10:29:34 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:21.155 10:29:34 -- target/dif.sh@137 -- # nvmfappstart 00:34:21.155 10:29:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:21.155 10:29:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:21.155 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.155 10:29:34 -- nvmf/common.sh@469 -- # nvmfpid=515407 00:34:21.155 10:29:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:21.155 10:29:34 -- nvmf/common.sh@470 -- # waitforlisten 515407 00:34:21.155 10:29:34 -- common/autotest_common.sh@819 -- # '[' -z 515407 ']' 00:34:21.155 10:29:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 
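waitforlisten, whose trace begins here, polls until the target answers on its RPC socket or dies trying. A simplified stand-in with the same shape — the real helper carries more retry and bookkeeping logic, and scripts/rpc.py plus the default socket path are assumptions taken from this environment:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        # give up if the app died during startup
        kill -0 "$pid" 2>/dev/null || return 1
        # an answered RPC is the readiness signal
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}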
00:34:21.155 10:29:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:21.155 10:29:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.155 10:29:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:21.155 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.155 [2024-04-24 10:29:34.135408] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:34:21.155 [2024-04-24 10:29:34.135454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.155 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.155 [2024-04-24 10:29:34.193903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.155 [2024-04-24 10:29:34.271421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:21.155 [2024-04-24 10:29:34.271526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.155 [2024-04-24 10:29:34.271534] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.155 [2024-04-24 10:29:34.271541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.155 [2024-04-24 10:29:34.271560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.722 10:29:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:21.722 10:29:34 -- common/autotest_common.sh@852 -- # return 0 00:34:21.722 10:29:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:21.722 10:29:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:21.722 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.722 10:29:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.722 10:29:34 -- target/dif.sh@139 -- # create_transport 00:34:21.722 10:29:34 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:21.722 10:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.722 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.722 [2024-04-24 10:29:34.963453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.722 10:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.722 10:29:34 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:21.722 10:29:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:21.722 10:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:21.722 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.722 ************************************ 00:34:21.722 START TEST fio_dif_1_default 00:34:21.722 ************************************ 00:34:21.722 10:29:34 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:21.722 10:29:34 -- target/dif.sh@86 -- # create_subsystems 0 00:34:21.722 10:29:34 -- target/dif.sh@28 -- # local sub 00:34:21.722 10:29:34 -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.722 10:29:34 -- target/dif.sh@31 -- # create_subsystem 0 00:34:21.722 10:29:34 -- target/dif.sh@18 -- # local sub_id=0 00:34:21.722 10:29:34 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:21.722 10:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.722 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.722 bdev_null0 00:34:21.722 10:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.722 10:29:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:21.722 10:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.722 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.722 10:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.722 10:29:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:21.722 10:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.722 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 10:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.981 10:29:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.981 10:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.981 10:29:34 -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 [2024-04-24 10:29:35.003672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.981 10:29:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.981 10:29:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:21.981 10:29:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:21.981 10:29:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:21.981 10:29:35 -- nvmf/common.sh@520 -- # config=() 00:34:21.981 10:29:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.981 10:29:35 -- nvmf/common.sh@520 -- # local subsystem config 00:34:21.981 10:29:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.981 10:29:35 -- target/dif.sh@82 -- # gen_fio_conf 00:34:21.981 10:29:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:21.981 10:29:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:21.981 10:29:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:21.981 { 00:34:21.981 "params": { 00:34:21.981 "name": "Nvme$subsystem", 00:34:21.981 "trtype": "$TEST_TRANSPORT", 00:34:21.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.981 "adrfam": "ipv4", 00:34:21.981 "trsvcid": "$NVMF_PORT", 00:34:21.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.981 "hdgst": ${hdgst:-false}, 00:34:21.981 "ddgst": ${ddgst:-false} 00:34:21.981 }, 00:34:21.981 "method": "bdev_nvme_attach_controller" 00:34:21.981 } 00:34:21.981 EOF 00:34:21.981 )") 00:34:21.981 10:29:35 -- target/dif.sh@54 -- # local file 00:34:21.981 10:29:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:21.981 10:29:35 -- target/dif.sh@56 -- # cat 00:34:21.981 10:29:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:21.981 10:29:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.981 10:29:35 -- nvmf/common.sh@542 -- # cat 00:34:21.981 10:29:35 -- common/autotest_common.sh@1320 -- # shift 00:34:21.981 
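The null-bdev subsystem that fio is about to exercise was assembled by the rpc_cmd calls traced just above. Replayed by hand against the running target, the same sequence is (rpc.py path and socket assumed from this environment):

rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.sock"
# 64 MB null bdev, 512-byte blocks, 16-byte metadata carrying DIF type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420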
10:29:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:21.981 10:29:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.981 10:29:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:21.981 10:29:35 -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:21.981 10:29:35 -- nvmf/common.sh@544 -- # jq . 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:21.981 10:29:35 -- nvmf/common.sh@545 -- # IFS=, 00:34:21.981 10:29:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:21.981 "params": { 00:34:21.981 "name": "Nvme0", 00:34:21.981 "trtype": "tcp", 00:34:21.981 "traddr": "10.0.0.2", 00:34:21.981 "adrfam": "ipv4", 00:34:21.981 "trsvcid": "4420", 00:34:21.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:21.981 "hdgst": false, 00:34:21.981 "ddgst": false 00:34:21.981 }, 00:34:21.981 "method": "bdev_nvme_attach_controller" 00:34:21.981 }' 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:21.981 10:29:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:21.981 10:29:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:21.981 10:29:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:21.981 10:29:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:21.981 10:29:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:21.981 10:29:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.240 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:22.240 fio-3.35 00:34:22.240 Starting 1 thread 00:34:22.240 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.807 [2024-04-24 10:29:35.850345] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
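For reference, the fio command the wrapper above expands to, recast with ordinary files in place of the /dev/fd descriptors (file names here are illustrative; the JSON file would hold the bdev_nvme_attach_controller stanza printed above, wrapped in the usual subsystems/config envelope):

LD_PRELOAD="$rootdir/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=bdev_nvme.json \
    randread.fio

The two rpc.c *ERROR* lines at this point are expected noise: the spdk_bdev fio plugin brings up its own SPDK app instance, which tries to bind the default RPC socket that nvmf_tgt already owns; the job proceeds regardless, as the bandwidth summary below shows.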
00:34:22.807 [2024-04-24 10:29:35.850397] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:32.786 00:34:32.786 filename0: (groupid=0, jobs=1): err= 0: pid=515931: Wed Apr 24 10:29:45 2024 00:34:32.786 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:34:32.786 slat (nsec): min=5845, max=24168, avg=6266.59, stdev=1534.39 00:34:32.786 clat (usec): min=747, max=43071, avg=21080.02, stdev=20205.28 00:34:32.786 lat (usec): min=753, max=43095, avg=21086.29, stdev=20205.29 00:34:32.786 clat percentiles (usec): 00:34:32.786 | 1.00th=[ 758], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 791], 00:34:32.786 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[41157], 60.00th=[41157], 00:34:32.786 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:32.786 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:32.786 | 99.99th=[43254] 00:34:32.786 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=23.47, samples=19 00:34:32.786 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:34:32.786 lat (usec) : 750=0.21%, 1000=49.58% 00:34:32.786 lat (msec) : 50=50.21% 00:34:32.786 cpu : usr=94.80%, sys=4.96%, ctx=11, majf=0, minf=258 00:34:32.786 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:32.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.786 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.786 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:32.786 00:34:32.786 Run status group 0 (all jobs): 00:34:32.786 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7584KiB (7766kB), run=10001-10001msec 00:34:33.045 10:29:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:33.045 10:29:46 -- target/dif.sh@43 -- # local sub 00:34:33.045 10:29:46 -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.045 10:29:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:33.045 10:29:46 -- target/dif.sh@36 -- # local sub_id=0 00:34:33.045 10:29:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:33.045 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.045 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.045 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.045 10:29:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:33.045 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.045 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.045 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.045 00:34:33.045 real 0m11.198s 00:34:33.045 user 0m16.391s 00:34:33.045 sys 0m0.757s 00:34:33.045 10:29:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:33.045 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.045 ************************************ 00:34:33.045 END TEST fio_dif_1_default 00:34:33.045 ************************************ 00:34:33.045 10:29:46 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:33.045 10:29:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:33.045 10:29:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:33.045 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.045 ************************************ 00:34:33.045 START TEST 
fio_dif_1_multi_subsystems 00:34:33.045 ************************************ 00:34:33.045 10:29:46 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:33.045 10:29:46 -- target/dif.sh@92 -- # local files=1 00:34:33.045 10:29:46 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:33.045 10:29:46 -- target/dif.sh@28 -- # local sub 00:34:33.045 10:29:46 -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.045 10:29:46 -- target/dif.sh@31 -- # create_subsystem 0 00:34:33.045 10:29:46 -- target/dif.sh@18 -- # local sub_id=0 00:34:33.045 10:29:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:33.045 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 bdev_null0 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 [2024-04-24 10:29:46.243584] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.046 10:29:46 -- target/dif.sh@31 -- # create_subsystem 1 00:34:33.046 10:29:46 -- target/dif.sh@18 -- # local sub_id=1 00:34:33.046 10:29:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 bdev_null1 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:33.046 10:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.046 10:29:46 -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.046 10:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.046 10:29:46 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:33.046 10:29:46 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:33.046 10:29:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:33.046 10:29:46 -- nvmf/common.sh@520 -- # config=() 00:34:33.046 10:29:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.046 10:29:46 -- nvmf/common.sh@520 -- # local subsystem config 00:34:33.046 10:29:46 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.046 10:29:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:33.046 10:29:46 -- target/dif.sh@82 -- # gen_fio_conf 00:34:33.046 10:29:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:33.046 { 00:34:33.046 "params": { 00:34:33.046 "name": "Nvme$subsystem", 00:34:33.046 "trtype": "$TEST_TRANSPORT", 00:34:33.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.046 "adrfam": "ipv4", 00:34:33.046 "trsvcid": "$NVMF_PORT", 00:34:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.046 "hdgst": ${hdgst:-false}, 00:34:33.046 "ddgst": ${ddgst:-false} 00:34:33.046 }, 00:34:33.046 "method": "bdev_nvme_attach_controller" 00:34:33.046 } 00:34:33.046 EOF 00:34:33.046 )") 00:34:33.046 10:29:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:33.046 10:29:46 -- target/dif.sh@54 -- # local file 00:34:33.046 10:29:46 -- target/dif.sh@56 -- # cat 00:34:33.046 10:29:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:33.046 10:29:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:33.046 10:29:46 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.046 10:29:46 -- common/autotest_common.sh@1320 -- # shift 00:34:33.046 10:29:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:33.046 10:29:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.046 10:29:46 -- nvmf/common.sh@542 -- # cat 00:34:33.046 10:29:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:33.046 10:29:46 -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.046 10:29:46 -- target/dif.sh@73 -- # cat 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:33.046 10:29:46 -- target/dif.sh@72 -- # (( file++ )) 00:34:33.046 10:29:46 -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.046 10:29:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:33.046 10:29:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:33.046 { 00:34:33.046 "params": { 00:34:33.046 "name": "Nvme$subsystem", 00:34:33.046 "trtype": "$TEST_TRANSPORT", 00:34:33.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.046 "adrfam": "ipv4", 00:34:33.046 "trsvcid": "$NVMF_PORT", 00:34:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.046 "hdgst": ${hdgst:-false}, 00:34:33.046 "ddgst": ${ddgst:-false} 00:34:33.046 }, 
00:34:33.046 "method": "bdev_nvme_attach_controller" 00:34:33.046 } 00:34:33.046 EOF 00:34:33.046 )") 00:34:33.046 10:29:46 -- nvmf/common.sh@542 -- # cat 00:34:33.046 10:29:46 -- nvmf/common.sh@544 -- # jq . 00:34:33.046 10:29:46 -- nvmf/common.sh@545 -- # IFS=, 00:34:33.046 10:29:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:33.046 "params": { 00:34:33.046 "name": "Nvme0", 00:34:33.046 "trtype": "tcp", 00:34:33.046 "traddr": "10.0.0.2", 00:34:33.046 "adrfam": "ipv4", 00:34:33.046 "trsvcid": "4420", 00:34:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:33.046 "hdgst": false, 00:34:33.046 "ddgst": false 00:34:33.046 }, 00:34:33.046 "method": "bdev_nvme_attach_controller" 00:34:33.046 },{ 00:34:33.046 "params": { 00:34:33.046 "name": "Nvme1", 00:34:33.046 "trtype": "tcp", 00:34:33.046 "traddr": "10.0.0.2", 00:34:33.046 "adrfam": "ipv4", 00:34:33.046 "trsvcid": "4420", 00:34:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.046 "hdgst": false, 00:34:33.046 "ddgst": false 00:34:33.046 }, 00:34:33.046 "method": "bdev_nvme_attach_controller" 00:34:33.046 }' 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:33.046 10:29:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:33.046 10:29:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:33.046 10:29:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:33.322 10:29:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:33.322 10:29:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:33.322 10:29:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:33.322 10:29:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.579 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:33.579 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:33.579 fio-3.35 00:34:33.579 Starting 2 threads 00:34:33.579 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.142 [2024-04-24 10:29:47.126800] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:34.142 [2024-04-24 10:29:47.126847] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:44.110 00:34:44.110 filename0: (groupid=0, jobs=1): err= 0: pid=517815: Wed Apr 24 10:29:57 2024 00:34:44.110 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10020msec) 00:34:44.110 slat (nsec): min=6072, max=79908, avg=9430.61, stdev=3597.61 00:34:44.110 clat (usec): min=748, max=42973, avg=21521.12, stdev=20306.94 00:34:44.110 lat (usec): min=754, max=42984, avg=21530.55, stdev=20305.12 00:34:44.110 clat percentiles (usec): 00:34:44.110 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 1004], 20.00th=[ 1074], 00:34:44.110 | 30.00th=[ 1106], 40.00th=[ 1483], 50.00th=[41157], 60.00th=[41681], 00:34:44.110 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:44.110 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:44.110 | 99.99th=[42730] 00:34:44.110 bw ( KiB/s): min= 704, max= 768, per=50.02%, avg=742.40, stdev=32.17, samples=20 00:34:44.110 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:34:44.110 lat (usec) : 750=0.05%, 1000=9.30% 00:34:44.110 lat (msec) : 2=40.54%, 50=50.11% 00:34:44.110 cpu : usr=98.03%, sys=1.71%, ctx=29, majf=0, minf=170 00:34:44.110 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.110 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.110 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:44.110 filename1: (groupid=0, jobs=1): err= 0: pid=517816: Wed Apr 24 10:29:57 2024 00:34:44.110 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:34:44.110 slat (nsec): min=6085, max=79611, avg=9533.11, stdev=3630.59 00:34:44.110 clat (usec): min=710, max=43031, avg=21567.27, stdev=20331.62 00:34:44.110 lat (usec): min=716, max=43042, avg=21576.80, stdev=20329.78 00:34:44.110 clat percentiles (usec): 00:34:44.110 | 1.00th=[ 725], 5.00th=[ 766], 10.00th=[ 979], 20.00th=[ 1172], 00:34:44.110 | 30.00th=[ 1221], 40.00th=[ 1369], 50.00th=[41157], 60.00th=[41681], 00:34:44.110 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:44.110 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:34:44.110 | 99.99th=[43254] 00:34:44.110 bw ( KiB/s): min= 672, max= 768, per=49.88%, avg=740.80, stdev=34.86, samples=20 00:34:44.110 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:34:44.110 lat (usec) : 750=3.88%, 1000=10.56% 00:34:44.110 lat (msec) : 2=35.34%, 50=50.22% 00:34:44.110 cpu : usr=97.82%, sys=1.91%, ctx=23, majf=0, minf=246 00:34:44.110 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.110 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.110 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:44.110 00:34:44.110 Run status group 0 (all jobs): 00:34:44.110 READ: bw=1483KiB/s (1519kB/s), 741KiB/s-743KiB/s (759kB/s-760kB/s), io=14.5MiB (15.2MB), run=10020-10020msec 00:34:44.369 10:29:57 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:44.369 10:29:57 -- target/dif.sh@43 -- # local sub 00:34:44.369 10:29:57 -- target/dif.sh@45 -- # for sub in "$@" 
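The teardown that follows mirrors the setup in reverse: for each subsystem id, delete the NVMe-oF subsystem first, then the null bdev behind it. Condensed from the destroy_subsystems/destroy_subsystem calls traced here (a sketch of the flow, not the literal bodies in target/dif.sh):

destroy_subsystem() {
    local sub_id=$1
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
    rpc_cmd bdev_null_delete "bdev_null${sub_id}"
}

destroy_subsystems() {
    local sub
    for sub in "$@"; do
        destroy_subsystem "$sub"
    done
}

destroy_subsystems 0 1   # the two-subsystem form used by this test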
00:34:44.369 10:29:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:44.369 10:29:57 -- target/dif.sh@36 -- # local sub_id=0 00:34:44.369 10:29:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:44.369 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.369 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.369 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.369 10:29:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:44.369 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.369 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.369 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.369 10:29:57 -- target/dif.sh@45 -- # for sub in "$@" 00:34:44.369 10:29:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:44.369 10:29:57 -- target/dif.sh@36 -- # local sub_id=1 00:34:44.369 10:29:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:44.369 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.369 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.369 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.369 10:29:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:44.369 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.369 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.369 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.369 00:34:44.369 real 0m11.298s 00:34:44.369 user 0m26.354s 00:34:44.369 sys 0m0.640s 00:34:44.369 10:29:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:44.370 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 ************************************ 00:34:44.370 END TEST fio_dif_1_multi_subsystems 00:34:44.370 ************************************ 00:34:44.370 10:29:57 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:44.370 10:29:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:44.370 10:29:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:44.370 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 ************************************ 00:34:44.370 START TEST fio_dif_rand_params 00:34:44.370 ************************************ 00:34:44.370 10:29:57 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:44.370 10:29:57 -- target/dif.sh@100 -- # local NULL_DIF 00:34:44.370 10:29:57 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:44.370 10:29:57 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:44.370 10:29:57 -- target/dif.sh@103 -- # bs=128k 00:34:44.370 10:29:57 -- target/dif.sh@103 -- # numjobs=3 00:34:44.370 10:29:57 -- target/dif.sh@103 -- # iodepth=3 00:34:44.370 10:29:57 -- target/dif.sh@103 -- # runtime=5 00:34:44.370 10:29:57 -- target/dif.sh@105 -- # create_subsystems 0 00:34:44.370 10:29:57 -- target/dif.sh@28 -- # local sub 00:34:44.370 10:29:57 -- target/dif.sh@30 -- # for sub in "$@" 00:34:44.370 10:29:57 -- target/dif.sh@31 -- # create_subsystem 0 00:34:44.370 10:29:57 -- target/dif.sh@18 -- # local sub_id=0 00:34:44.370 10:29:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:44.370 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.370 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 bdev_null0 00:34:44.370 10:29:57 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.370 10:29:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:44.370 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.370 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.370 10:29:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:44.370 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.370 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.370 10:29:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:44.370 10:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.370 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 [2024-04-24 10:29:57.580199] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.370 10:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.370 10:29:57 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:44.370 10:29:57 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:44.370 10:29:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:44.370 10:29:57 -- nvmf/common.sh@520 -- # config=() 00:34:44.370 10:29:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:44.370 10:29:57 -- nvmf/common.sh@520 -- # local subsystem config 00:34:44.370 10:29:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:44.370 10:29:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:44.370 10:29:57 -- target/dif.sh@82 -- # gen_fio_conf 00:34:44.370 10:29:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:44.370 { 00:34:44.370 "params": { 00:34:44.370 "name": "Nvme$subsystem", 00:34:44.370 "trtype": "$TEST_TRANSPORT", 00:34:44.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:44.370 "adrfam": "ipv4", 00:34:44.370 "trsvcid": "$NVMF_PORT", 00:34:44.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:44.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:44.370 "hdgst": ${hdgst:-false}, 00:34:44.370 "ddgst": ${ddgst:-false} 00:34:44.370 }, 00:34:44.370 "method": "bdev_nvme_attach_controller" 00:34:44.370 } 00:34:44.370 EOF 00:34:44.370 )") 00:34:44.370 10:29:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:44.370 10:29:57 -- target/dif.sh@54 -- # local file 00:34:44.370 10:29:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:44.370 10:29:57 -- target/dif.sh@56 -- # cat 00:34:44.370 10:29:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:44.370 10:29:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:44.370 10:29:57 -- common/autotest_common.sh@1320 -- # shift 00:34:44.370 10:29:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:44.370 10:29:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:44.370 10:29:57 -- nvmf/common.sh@542 -- # cat 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:44.370 10:29:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:44.370 10:29:57 -- target/dif.sh@72 -- # (( file <= files )) 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:44.370 10:29:57 -- nvmf/common.sh@544 -- # jq . 00:34:44.370 10:29:57 -- nvmf/common.sh@545 -- # IFS=, 00:34:44.370 10:29:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:44.370 "params": { 00:34:44.370 "name": "Nvme0", 00:34:44.370 "trtype": "tcp", 00:34:44.370 "traddr": "10.0.0.2", 00:34:44.370 "adrfam": "ipv4", 00:34:44.370 "trsvcid": "4420", 00:34:44.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:44.370 "hdgst": false, 00:34:44.370 "ddgst": false 00:34:44.370 }, 00:34:44.370 "method": "bdev_nvme_attach_controller" 00:34:44.370 }' 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:44.370 10:29:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:44.370 10:29:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:44.370 10:29:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:44.628 10:29:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:44.628 10:29:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:44.628 10:29:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:44.628 10:29:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:44.886 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:44.886 ... 00:34:44.886 fio-3.35 00:34:44.886 Starting 3 threads 00:34:44.886 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.144 [2024-04-24 10:29:58.255008] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
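For fio_dif_rand_params the subsystem is rebuilt with NULL_DIF=3, i.e. the null bdev now carries DIF type 3 protection in its 16-byte per-block metadata. The setup traced above reduces to four RPCs per subsystem; a sketch mirroring create_subsystem in target/dif.sh (the 64 is the bdev size, in MiB by bdev_null_create's convention, and the literal 3 stands in for the script's $NULL_DIF):

create_subsystem() {
    local sub_id=$1
    # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
    rpc_cmd bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 3
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420
}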
00:34:45.144 [2024-04-24 10:29:58.255065] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:50.406 00:34:50.406 filename0: (groupid=0, jobs=1): err= 0: pid=519776: Wed Apr 24 10:30:03 2024 00:34:50.406 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5004msec) 00:34:50.406 slat (nsec): min=6209, max=35188, avg=10090.24, stdev=2932.11 00:34:50.406 clat (usec): min=3740, max=92395, avg=11815.74, stdev=12949.17 00:34:50.406 lat (usec): min=3747, max=92408, avg=11825.83, stdev=12949.44 00:34:50.406 clat percentiles (usec): 00:34:50.406 | 1.00th=[ 4424], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6194], 00:34:50.406 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 8225], 00:34:50.406 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[46924], 95.00th=[49546], 00:34:50.406 | 99.00th=[51119], 99.50th=[51119], 99.90th=[89654], 99.95th=[92799], 00:34:50.406 | 99.99th=[92799] 00:34:50.406 bw ( KiB/s): min=27392, max=48384, per=31.48%, avg=32435.20, stdev=7488.02, samples=10 00:34:50.406 iops : min= 214, max= 378, avg=253.40, stdev=58.50, samples=10 00:34:50.406 lat (msec) : 4=0.32%, 10=80.38%, 20=9.30%, 50=6.86%, 100=3.15% 00:34:50.406 cpu : usr=94.62%, sys=5.06%, ctx=7, majf=0, minf=98 00:34:50.406 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.406 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.406 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:50.406 filename0: (groupid=0, jobs=1): err= 0: pid=519777: Wed Apr 24 10:30:03 2024 00:34:50.406 read: IOPS=281, BW=35.1MiB/s (36.8MB/s)(177MiB/5048msec) 00:34:50.406 slat (nsec): min=4067, max=20142, avg=10229.96, stdev=2710.81 00:34:50.406 clat (usec): min=3940, max=90462, avg=10627.35, stdev=11693.61 00:34:50.406 lat (usec): min=3947, max=90475, avg=10637.58, stdev=11693.83 00:34:50.406 clat percentiles (usec): 00:34:50.406 | 1.00th=[ 4228], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5932], 00:34:50.406 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7898], 00:34:50.406 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[49021], 00:34:50.406 | 99.00th=[50594], 99.50th=[51119], 99.90th=[89654], 99.95th=[90702], 00:34:50.406 | 99.99th=[90702] 00:34:50.406 bw ( KiB/s): min=16896, max=50176, per=35.18%, avg=36249.60, stdev=10016.55, samples=10 00:34:50.406 iops : min= 132, max= 392, avg=283.20, stdev=78.25, samples=10 00:34:50.406 lat (msec) : 4=0.14%, 10=85.34%, 20=7.05%, 50=5.21%, 100=2.26% 00:34:50.406 cpu : usr=94.23%, sys=5.35%, ctx=12, majf=0, minf=40 00:34:50.406 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.406 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.406 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:50.406 filename0: (groupid=0, jobs=1): err= 0: pid=519778: Wed Apr 24 10:30:03 2024 00:34:50.406 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(172MiB/5004msec) 00:34:50.406 slat (nsec): min=6171, max=54156, avg=10277.13, stdev=2942.24 00:34:50.406 clat (usec): min=3875, max=91246, avg=10895.34, stdev=12068.41 00:34:50.406 lat (usec): min=3883, max=91259, avg=10905.61, stdev=12068.56 00:34:50.406 
clat percentiles (usec): 00:34:50.406 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 6128], 00:34:50.406 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 8094], 00:34:50.406 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[48497], 00:34:50.406 | 99.00th=[51119], 99.50th=[52167], 99.90th=[90702], 99.95th=[91751], 00:34:50.406 | 99.99th=[91751] 00:34:50.406 bw ( KiB/s): min=23296, max=44544, per=34.14%, avg=35182.40, stdev=7888.94, samples=10 00:34:50.406 iops : min= 182, max= 348, avg=274.80, stdev=61.59, samples=10 00:34:50.407 lat (msec) : 4=0.15%, 10=85.90%, 20=6.03%, 50=6.03%, 100=1.89% 00:34:50.407 cpu : usr=94.66%, sys=4.94%, ctx=10, majf=0, minf=161 00:34:50.407 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.407 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:50.407 00:34:50.407 Run status group 0 (all jobs): 00:34:50.407 READ: bw=101MiB/s (106MB/s), 31.7MiB/s-35.1MiB/s (33.2MB/s-36.8MB/s), io=508MiB (533MB), run=5004-5048msec 00:34:50.407 10:30:03 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:50.407 10:30:03 -- target/dif.sh@43 -- # local sub 00:34:50.407 10:30:03 -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.407 10:30:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:50.407 10:30:03 -- target/dif.sh@36 -- # local sub_id=0 00:34:50.407 10:30:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:50.407 10:30:03 -- target/dif.sh@109 -- # bs=4k 00:34:50.407 10:30:03 -- target/dif.sh@109 -- # numjobs=8 00:34:50.407 10:30:03 -- target/dif.sh@109 -- # iodepth=16 00:34:50.407 10:30:03 -- target/dif.sh@109 -- # runtime= 00:34:50.407 10:30:03 -- target/dif.sh@109 -- # files=2 00:34:50.407 10:30:03 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:50.407 10:30:03 -- target/dif.sh@28 -- # local sub 00:34:50.407 10:30:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.407 10:30:03 -- target/dif.sh@31 -- # create_subsystem 0 00:34:50.407 10:30:03 -- target/dif.sh@18 -- # local sub_id=0 00:34:50.407 10:30:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 bdev_null0 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 10:30:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 [2024-04-24 10:30:03.649968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.407 10:30:03 -- target/dif.sh@31 -- # create_subsystem 1 00:34:50.407 10:30:03 -- target/dif.sh@18 -- # local sub_id=1 00:34:50.407 10:30:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 bdev_null1 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.407 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.407 10:30:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.407 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.407 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.665 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.665 10:30:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.665 10:30:03 -- target/dif.sh@31 -- # create_subsystem 2 00:34:50.665 10:30:03 -- target/dif.sh@18 -- # local sub_id=2 00:34:50.665 10:30:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:50.665 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.665 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.665 bdev_null2 00:34:50.665 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.665 10:30:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:50.665 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.665 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.665 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.665 10:30:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:50.665 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:34:50.665 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.665 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.665 10:30:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:50.665 10:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.665 10:30:03 -- common/autotest_common.sh@10 -- # set +x 00:34:50.665 10:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.665 10:30:03 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:50.665 10:30:03 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:50.665 10:30:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:50.665 10:30:03 -- nvmf/common.sh@520 -- # config=() 00:34:50.665 10:30:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.665 10:30:03 -- nvmf/common.sh@520 -- # local subsystem config 00:34:50.665 10:30:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:50.665 10:30:03 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.665 10:30:03 -- target/dif.sh@82 -- # gen_fio_conf 00:34:50.665 10:30:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:50.665 { 00:34:50.665 "params": { 00:34:50.665 "name": "Nvme$subsystem", 00:34:50.665 "trtype": "$TEST_TRANSPORT", 00:34:50.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.665 "adrfam": "ipv4", 00:34:50.665 "trsvcid": "$NVMF_PORT", 00:34:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.665 "hdgst": ${hdgst:-false}, 00:34:50.665 "ddgst": ${ddgst:-false} 00:34:50.665 }, 00:34:50.665 "method": "bdev_nvme_attach_controller" 00:34:50.665 } 00:34:50.665 EOF 00:34:50.665 )") 00:34:50.665 10:30:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:50.665 10:30:03 -- target/dif.sh@54 -- # local file 00:34:50.665 10:30:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.665 10:30:03 -- target/dif.sh@56 -- # cat 00:34:50.665 10:30:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:50.665 10:30:03 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.665 10:30:03 -- common/autotest_common.sh@1320 -- # shift 00:34:50.665 10:30:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:50.665 10:30:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.665 10:30:03 -- nvmf/common.sh@542 -- # cat 00:34:50.665 10:30:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.665 10:30:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.665 10:30:03 -- target/dif.sh@73 -- # cat 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:50.665 10:30:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:50.665 10:30:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:50.665 { 00:34:50.665 "params": { 00:34:50.665 "name": "Nvme$subsystem", 00:34:50.665 "trtype": "$TEST_TRANSPORT", 00:34:50.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.665 "adrfam": "ipv4", 
00:34:50.665 "trsvcid": "$NVMF_PORT", 00:34:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.665 "hdgst": ${hdgst:-false}, 00:34:50.665 "ddgst": ${ddgst:-false} 00:34:50.665 }, 00:34:50.665 "method": "bdev_nvme_attach_controller" 00:34:50.665 } 00:34:50.665 EOF 00:34:50.665 )") 00:34:50.665 10:30:03 -- nvmf/common.sh@542 -- # cat 00:34:50.665 10:30:03 -- target/dif.sh@72 -- # (( file++ )) 00:34:50.665 10:30:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.665 10:30:03 -- target/dif.sh@73 -- # cat 00:34:50.665 10:30:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:50.665 10:30:03 -- target/dif.sh@72 -- # (( file++ )) 00:34:50.665 10:30:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:50.665 { 00:34:50.665 "params": { 00:34:50.665 "name": "Nvme$subsystem", 00:34:50.665 "trtype": "$TEST_TRANSPORT", 00:34:50.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.665 "adrfam": "ipv4", 00:34:50.665 "trsvcid": "$NVMF_PORT", 00:34:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.665 "hdgst": ${hdgst:-false}, 00:34:50.665 "ddgst": ${ddgst:-false} 00:34:50.665 }, 00:34:50.665 "method": "bdev_nvme_attach_controller" 00:34:50.665 } 00:34:50.665 EOF 00:34:50.665 )") 00:34:50.665 10:30:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.665 10:30:03 -- nvmf/common.sh@542 -- # cat 00:34:50.665 10:30:03 -- nvmf/common.sh@544 -- # jq . 00:34:50.665 10:30:03 -- nvmf/common.sh@545 -- # IFS=, 00:34:50.665 10:30:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:50.665 "params": { 00:34:50.665 "name": "Nvme0", 00:34:50.665 "trtype": "tcp", 00:34:50.665 "traddr": "10.0.0.2", 00:34:50.665 "adrfam": "ipv4", 00:34:50.665 "trsvcid": "4420", 00:34:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.665 "hdgst": false, 00:34:50.665 "ddgst": false 00:34:50.665 }, 00:34:50.665 "method": "bdev_nvme_attach_controller" 00:34:50.665 },{ 00:34:50.665 "params": { 00:34:50.665 "name": "Nvme1", 00:34:50.665 "trtype": "tcp", 00:34:50.665 "traddr": "10.0.0.2", 00:34:50.665 "adrfam": "ipv4", 00:34:50.665 "trsvcid": "4420", 00:34:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:50.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:50.665 "hdgst": false, 00:34:50.665 "ddgst": false 00:34:50.665 }, 00:34:50.665 "method": "bdev_nvme_attach_controller" 00:34:50.665 },{ 00:34:50.665 "params": { 00:34:50.665 "name": "Nvme2", 00:34:50.665 "trtype": "tcp", 00:34:50.665 "traddr": "10.0.0.2", 00:34:50.665 "adrfam": "ipv4", 00:34:50.665 "trsvcid": "4420", 00:34:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:50.665 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:50.665 "hdgst": false, 00:34:50.665 "ddgst": false 00:34:50.665 }, 00:34:50.665 "method": "bdev_nvme_attach_controller" 00:34:50.665 }' 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:50.665 10:30:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:50.665 10:30:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:50.665 10:30:03 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:34:50.665 10:30:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:50.665 10:30:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:50.665 10:30:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.923 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:50.923 ... 00:34:50.923 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:50.923 ... 00:34:50.923 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:50.923 ... 00:34:50.923 fio-3.35 00:34:50.923 Starting 24 threads 00:34:50.923 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.521 [2024-04-24 10:30:04.691507] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:51.521 [2024-04-24 10:30:04.691555] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:03.715 00:35:03.715 filename0: (groupid=0, jobs=1): err= 0: pid=521183: Wed Apr 24 10:30:15 2024 00:35:03.715 read: IOPS=626, BW=2507KiB/s (2567kB/s)(24.8MiB/10134msec) 00:35:03.715 slat (nsec): min=4075, max=72793, avg=12400.30, stdev=7268.37 00:35:03.715 clat (msec): min=3, max=143, avg=25.18, stdev= 4.75 00:35:03.715 lat (msec): min=3, max=143, avg=25.19, stdev= 4.75 00:35:03.715 clat percentiles (msec): 00:35:03.715 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 26], 00:35:03.715 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.715 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.715 | 99.00th=[ 33], 99.50th=[ 35], 99.90th=[ 50], 99.95th=[ 50], 00:35:03.715 | 99.99th=[ 144] 00:35:03.715 bw ( KiB/s): min= 2240, max= 3264, per=4.48%, avg=2534.40, stdev=270.01, samples=20 00:35:03.715 iops : min= 560, max= 816, avg=633.60, stdev=67.50, samples=20 00:35:03.715 lat (msec) : 4=0.16%, 10=2.13%, 20=7.46%, 50=90.22%, 250=0.03% 00:35:03.715 cpu : usr=98.93%, sys=0.66%, ctx=18, majf=0, minf=69 00:35:03.715 IO depths : 1=4.3%, 2=9.4%, 4=21.2%, 8=56.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:03.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.715 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.715 issued rwts: total=6352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.715 filename0: (groupid=0, jobs=1): err= 0: pid=521184: Wed Apr 24 10:30:15 2024 00:35:03.715 read: IOPS=595, BW=2380KiB/s (2437kB/s)(23.5MiB/10093msec) 00:35:03.715 slat (usec): min=4, max=108, avg=27.23, stdev=22.44 00:35:03.715 clat (msec): min=7, max=135, avg=26.65, stdev= 6.70 00:35:03.715 lat (msec): min=7, max=135, avg=26.68, stdev= 6.70 00:35:03.715 clat percentiles (msec): 00:35:03.715 | 1.00th=[ 16], 5.00th=[ 19], 10.00th=[ 23], 20.00th=[ 26], 00:35:03.715 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.715 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 34], 00:35:03.715 | 99.00th=[ 45], 99.50th=[ 48], 99.90th=[ 128], 99.95th=[ 136], 00:35:03.715 | 99.99th=[ 136] 00:35:03.715 bw ( KiB/s): min= 2048, max= 2640, per=4.23%, avg=2394.11, stdev=141.56, samples=19 00:35:03.715 iops : min= 512, max= 660, avg=598.53, stdev=35.39, samples=19 
00:35:03.715 lat (msec) : 10=0.37%, 20=6.83%, 50=92.49%, 100=0.05%, 250=0.27% 00:35:03.715 cpu : usr=98.94%, sys=0.66%, ctx=15, majf=0, minf=38 00:35:03.715 IO depths : 1=2.3%, 2=5.3%, 4=15.2%, 8=65.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:03.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.715 complete : 0=0.0%, 4=92.0%, 8=3.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.715 issued rwts: total=6006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.715 filename0: (groupid=0, jobs=1): err= 0: pid=521185: Wed Apr 24 10:30:15 2024 00:35:03.715 read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.5MiB/10154msec) 00:35:03.715 slat (usec): min=6, max=122, avg=38.64, stdev=23.60 00:35:03.715 clat (msec): min=12, max=161, avg=26.62, stdev= 5.79 00:35:03.715 lat (msec): min=12, max=161, avg=26.66, stdev= 5.78 00:35:03.715 clat percentiles (msec): 00:35:03.715 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.715 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.716 | 99.00th=[ 32], 99.50th=[ 39], 99.90th=[ 130], 99.95th=[ 131], 00:35:03.716 | 99.99th=[ 163] 00:35:03.716 bw ( KiB/s): min= 2304, max= 2565, per=4.24%, avg=2396.85, stdev=71.26, samples=20 00:35:03.716 iops : min= 576, max= 641, avg=599.20, stdev=17.78, samples=20 00:35:03.716 lat (msec) : 20=0.80%, 50=98.93%, 250=0.27% 00:35:03.716 cpu : usr=98.72%, sys=0.88%, ctx=15, majf=0, minf=53 00:35:03.716 IO depths : 1=5.8%, 2=11.6%, 4=23.9%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=6007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename0: (groupid=0, jobs=1): err= 0: pid=521186: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=591, BW=2364KiB/s (2421kB/s)(23.4MiB/10117msec) 00:35:03.716 slat (usec): min=6, max=167, avg=45.05, stdev=22.69 00:35:03.716 clat (msec): min=9, max=129, avg=26.66, stdev= 5.55 00:35:03.716 lat (msec): min=9, max=129, avg=26.71, stdev= 5.55 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.716 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.716 | 99.00th=[ 37], 99.50th=[ 42], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.716 | 99.99th=[ 130] 00:35:03.716 bw ( KiB/s): min= 2144, max= 2436, per=4.22%, avg=2386.00, stdev=79.98, samples=20 00:35:03.716 iops : min= 536, max= 609, avg=596.50, stdev=20.00, samples=20 00:35:03.716 lat (msec) : 10=0.03%, 20=0.60%, 50=99.10%, 250=0.27% 00:35:03.716 cpu : usr=99.00%, sys=0.59%, ctx=9, majf=0, minf=51 00:35:03.716 IO depths : 1=5.8%, 2=11.7%, 4=24.5%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=5980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename0: (groupid=0, jobs=1): err= 0: pid=521187: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=592, BW=2371KiB/s 
(2428kB/s)(23.4MiB/10117msec) 00:35:03.716 slat (usec): min=6, max=122, avg=43.10, stdev=23.66 00:35:03.716 clat (msec): min=13, max=129, avg=26.62, stdev= 5.44 00:35:03.716 lat (msec): min=13, max=129, avg=26.66, stdev= 5.44 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.716 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.716 | 99.00th=[ 32], 99.50th=[ 41], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.716 | 99.99th=[ 130] 00:35:03.716 bw ( KiB/s): min= 2272, max= 2436, per=4.23%, avg=2392.40, stdev=62.90, samples=20 00:35:03.716 iops : min= 568, max= 609, avg=598.10, stdev=15.72, samples=20 00:35:03.716 lat (msec) : 20=0.57%, 50=99.17%, 250=0.27% 00:35:03.716 cpu : usr=98.84%, sys=0.75%, ctx=16, majf=0, minf=37 00:35:03.716 IO depths : 1=5.6%, 2=11.7%, 4=24.5%, 8=51.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=5996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename0: (groupid=0, jobs=1): err= 0: pid=521188: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=595, BW=2380KiB/s (2437kB/s)(23.5MiB/10100msec) 00:35:03.716 slat (usec): min=5, max=111, avg=27.35, stdev=22.03 00:35:03.716 clat (msec): min=5, max=128, avg=26.61, stdev= 6.25 00:35:03.716 lat (msec): min=5, max=128, avg=26.64, stdev= 6.25 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 15], 5.00th=[ 19], 10.00th=[ 24], 20.00th=[ 26], 00:35:03.716 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 34], 00:35:03.716 | 99.00th=[ 45], 99.50th=[ 48], 99.90th=[ 114], 99.95th=[ 129], 00:35:03.716 | 99.99th=[ 129] 00:35:03.716 bw ( KiB/s): min= 2176, max= 2648, per=4.23%, avg=2394.95, stdev=127.34, samples=19 00:35:03.716 iops : min= 544, max= 662, avg=598.74, stdev=31.83, samples=19 00:35:03.716 lat (msec) : 10=0.47%, 20=6.37%, 50=92.81%, 100=0.18%, 250=0.17% 00:35:03.716 cpu : usr=98.86%, sys=0.75%, ctx=15, majf=0, minf=42 00:35:03.716 IO depths : 1=2.6%, 2=5.7%, 4=14.7%, 8=65.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=91.7%, 8=4.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=6010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename0: (groupid=0, jobs=1): err= 0: pid=521189: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=588, BW=2353KiB/s (2410kB/s)(23.2MiB/10090msec) 00:35:03.716 slat (usec): min=5, max=106, avg=28.98, stdev=22.42 00:35:03.716 clat (msec): min=7, max=128, avg=26.98, stdev= 5.27 00:35:03.716 lat (msec): min=7, max=128, avg=27.01, stdev= 5.26 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 19], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 27], 00:35:03.716 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 30], 00:35:03.716 | 99.00th=[ 42], 99.50th=[ 44], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.716 | 99.99th=[ 129] 00:35:03.716 bw ( KiB/s): min= 2160, max= 2440, per=4.20%, avg=2373.05, stdev=79.79, 
samples=19 00:35:03.716 iops : min= 540, max= 610, avg=593.26, stdev=19.95, samples=19 00:35:03.716 lat (msec) : 10=0.15%, 20=1.38%, 50=98.15%, 100=0.12%, 250=0.20% 00:35:03.716 cpu : usr=98.77%, sys=0.82%, ctx=63, majf=0, minf=52 00:35:03.716 IO depths : 1=0.2%, 2=2.6%, 4=10.9%, 8=70.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=91.4%, 8=5.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename0: (groupid=0, jobs=1): err= 0: pid=521190: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=591, BW=2367KiB/s (2423kB/s)(23.4MiB/10114msec) 00:35:03.716 slat (usec): min=4, max=110, avg=43.23, stdev=22.60 00:35:03.716 clat (msec): min=23, max=128, avg=26.66, stdev= 5.59 00:35:03.716 lat (msec): min=23, max=128, avg=26.70, stdev= 5.59 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.716 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.716 | 99.00th=[ 29], 99.50th=[ 62], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.716 | 99.99th=[ 129] 00:35:03.716 bw ( KiB/s): min= 2180, max= 2436, per=4.22%, avg=2387.80, stdev=74.47, samples=20 00:35:03.716 iops : min= 545, max= 609, avg=596.95, stdev=18.62, samples=20 00:35:03.716 lat (msec) : 50=99.47%, 100=0.27%, 250=0.27% 00:35:03.716 cpu : usr=98.75%, sys=0.86%, ctx=20, majf=0, minf=47 00:35:03.716 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename1: (groupid=0, jobs=1): err= 0: pid=521191: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=581, BW=2325KiB/s (2381kB/s)(22.9MiB/10090msec) 00:35:03.716 slat (usec): min=5, max=115, avg=25.64, stdev=21.96 00:35:03.716 clat (msec): min=7, max=130, avg=27.40, stdev= 6.34 00:35:03.716 lat (msec): min=7, max=130, avg=27.43, stdev= 6.34 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 27], 00:35:03.716 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 34], 00:35:03.716 | 99.00th=[ 45], 99.50th=[ 47], 99.90th=[ 131], 99.95th=[ 131], 00:35:03.716 | 99.99th=[ 131] 00:35:03.716 bw ( KiB/s): min= 2132, max= 2432, per=4.15%, avg=2347.58, stdev=82.01, samples=19 00:35:03.716 iops : min= 533, max= 608, avg=586.89, stdev=20.50, samples=19 00:35:03.716 lat (msec) : 10=0.03%, 20=1.36%, 50=98.33%, 250=0.27% 00:35:03.716 cpu : usr=98.78%, sys=0.83%, ctx=15, majf=0, minf=76 00:35:03.716 IO depths : 1=0.2%, 2=0.8%, 4=4.9%, 8=77.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=90.3%, 8=8.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=5866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename1: (groupid=0, jobs=1): err= 0: 
pid=521192: Wed Apr 24 10:30:15 2024 00:35:03.716 read: IOPS=587, BW=2350KiB/s (2406kB/s)(23.2MiB/10112msec) 00:35:03.716 slat (usec): min=5, max=587, avg=29.47, stdev=15.67 00:35:03.716 clat (msec): min=10, max=125, avg=26.93, stdev= 4.78 00:35:03.716 lat (msec): min=10, max=125, avg=26.96, stdev= 4.78 00:35:03.716 clat percentiles (msec): 00:35:03.716 | 1.00th=[ 21], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.716 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.716 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 30], 00:35:03.716 | 99.00th=[ 42], 99.50th=[ 59], 99.90th=[ 126], 99.95th=[ 126], 00:35:03.716 | 99.99th=[ 126] 00:35:03.716 bw ( KiB/s): min= 2180, max= 2432, per=4.19%, avg=2370.74, stdev=73.95, samples=19 00:35:03.716 iops : min= 545, max= 608, avg=592.68, stdev=18.49, samples=19 00:35:03.716 lat (msec) : 20=0.88%, 50=98.59%, 100=0.44%, 250=0.10% 00:35:03.716 cpu : usr=96.85%, sys=1.68%, ctx=158, majf=0, minf=55 00:35:03.716 IO depths : 1=3.0%, 2=6.8%, 4=17.5%, 8=62.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 complete : 0=0.0%, 4=92.6%, 8=2.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.716 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.716 filename1: (groupid=0, jobs=1): err= 0: pid=521193: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=550, BW=2203KiB/s (2256kB/s)(21.7MiB/10091msec) 00:35:03.717 slat (usec): min=5, max=123, avg=30.62, stdev=22.84 00:35:03.717 clat (msec): min=5, max=132, avg=28.81, stdev= 7.56 00:35:03.717 lat (msec): min=5, max=133, avg=28.84, stdev= 7.56 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 27], 00:35:03.717 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:35:03.717 | 70.00th=[ 28], 80.00th=[ 32], 90.00th=[ 37], 95.00th=[ 41], 00:35:03.717 | 99.00th=[ 47], 99.50th=[ 54], 99.90th=[ 133], 99.95th=[ 133], 00:35:03.717 | 99.99th=[ 133] 00:35:03.717 bw ( KiB/s): min= 1792, max= 2432, per=3.97%, avg=2243.79, stdev=195.53, samples=19 00:35:03.717 iops : min= 448, max= 608, avg=560.95, stdev=48.88, samples=19 00:35:03.717 lat (msec) : 10=0.49%, 20=1.10%, 50=97.84%, 100=0.29%, 250=0.29% 00:35:03.717 cpu : usr=98.73%, sys=0.87%, ctx=18, majf=0, minf=51 00:35:03.717 IO depths : 1=1.5%, 2=4.5%, 4=17.5%, 8=64.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=93.0%, 8=2.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=5557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.717 filename1: (groupid=0, jobs=1): err= 0: pid=521194: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=577, BW=2309KiB/s (2365kB/s)(22.8MiB/10094msec) 00:35:03.717 slat (usec): min=4, max=112, avg=32.35, stdev=23.54 00:35:03.717 clat (msec): min=10, max=130, avg=27.39, stdev= 5.74 00:35:03.717 lat (msec): min=10, max=130, avg=27.42, stdev= 5.74 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.717 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 30], 95.00th=[ 35], 00:35:03.717 | 99.00th=[ 45], 99.50th=[ 45], 99.90th=[ 131], 99.95th=[ 131], 00:35:03.717 | 99.99th=[ 131] 
00:35:03.717 bw ( KiB/s): min= 1888, max= 2464, per=4.12%, avg=2327.58, stdev=139.73, samples=19 00:35:03.717 iops : min= 472, max= 616, avg=581.89, stdev=34.93, samples=19 00:35:03.717 lat (msec) : 20=1.13%, 50=98.59%, 100=0.07%, 250=0.21% 00:35:03.717 cpu : usr=99.08%, sys=0.53%, ctx=19, majf=0, minf=29 00:35:03.717 IO depths : 1=3.3%, 2=6.8%, 4=16.2%, 8=62.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=92.4%, 8=3.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=5828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.717 filename1: (groupid=0, jobs=1): err= 0: pid=521195: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=593, BW=2376KiB/s (2433kB/s)(23.5MiB/10113msec) 00:35:03.717 slat (usec): min=6, max=117, avg=40.69, stdev=23.95 00:35:03.717 clat (msec): min=11, max=130, avg=26.58, stdev= 5.73 00:35:03.717 lat (msec): min=11, max=130, avg=26.62, stdev= 5.73 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 19], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.717 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.717 | 99.00th=[ 40], 99.50th=[ 41], 99.90th=[ 131], 99.95th=[ 131], 00:35:03.717 | 99.99th=[ 131] 00:35:03.717 bw ( KiB/s): min= 2224, max= 2560, per=4.24%, avg=2396.40, stdev=78.22, samples=20 00:35:03.717 iops : min= 556, max= 640, avg=599.10, stdev=19.56, samples=20 00:35:03.717 lat (msec) : 20=1.22%, 50=98.48%, 100=0.03%, 250=0.27% 00:35:03.717 cpu : usr=98.80%, sys=0.79%, ctx=21, majf=0, minf=53 00:35:03.717 IO depths : 1=5.0%, 2=10.4%, 4=22.2%, 8=54.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=6006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.717 filename1: (groupid=0, jobs=1): err= 0: pid=521196: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=615, BW=2464KiB/s (2523kB/s)(24.4MiB/10127msec) 00:35:03.717 slat (nsec): min=6794, max=95343, avg=14272.15, stdev=10459.66 00:35:03.717 clat (msec): min=5, max=130, avg=25.86, stdev= 6.38 00:35:03.717 lat (msec): min=5, max=130, avg=25.88, stdev= 6.38 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 24], 20.00th=[ 26], 00:35:03.717 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.717 | 99.00th=[ 33], 99.50th=[ 34], 99.90th=[ 131], 99.95th=[ 131], 00:35:03.717 | 99.99th=[ 131] 00:35:03.717 bw ( KiB/s): min= 2304, max= 3360, per=4.40%, avg=2485.70, stdev=231.42, samples=20 00:35:03.717 iops : min= 576, max= 840, avg=621.40, stdev=57.87, samples=20 00:35:03.717 lat (msec) : 10=1.57%, 20=4.62%, 50=93.56%, 250=0.26% 00:35:03.717 cpu : usr=98.91%, sys=0.70%, ctx=16, majf=0, minf=37 00:35:03.717 IO depths : 1=4.8%, 2=10.0%, 4=22.2%, 8=55.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=6238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:03.717 filename1: (groupid=0, jobs=1): err= 0: pid=521197: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=604, BW=2417KiB/s (2476kB/s)(23.9MiB/10113msec) 00:35:03.717 slat (usec): min=4, max=114, avg=33.77, stdev=23.94 00:35:03.717 clat (msec): min=9, max=113, avg=26.16, stdev= 5.52 00:35:03.717 lat (msec): min=9, max=113, avg=26.19, stdev= 5.52 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 26], 00:35:03.717 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.717 | 99.00th=[ 39], 99.50th=[ 61], 99.90th=[ 114], 99.95th=[ 114], 00:35:03.717 | 99.99th=[ 114] 00:35:03.717 bw ( KiB/s): min= 2176, max= 2912, per=4.31%, avg=2439.58, stdev=166.48, samples=19 00:35:03.717 iops : min= 544, max= 728, avg=609.89, stdev=41.62, samples=19 00:35:03.717 lat (msec) : 10=0.03%, 20=5.61%, 50=93.83%, 100=0.29%, 250=0.23% 00:35:03.717 cpu : usr=98.98%, sys=0.62%, ctx=14, majf=0, minf=38 00:35:03.717 IO depths : 1=4.6%, 2=9.5%, 4=20.4%, 8=57.0%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.717 filename1: (groupid=0, jobs=1): err= 0: pid=521198: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=589, BW=2358KiB/s (2415kB/s)(23.3MiB/10100msec) 00:35:03.717 slat (usec): min=6, max=115, avg=42.13, stdev=23.21 00:35:03.717 clat (msec): min=12, max=128, avg=26.76, stdev= 5.74 00:35:03.717 lat (msec): min=12, max=128, avg=26.80, stdev= 5.73 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.717 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.717 | 99.00th=[ 41], 99.50th=[ 50], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.717 | 99.99th=[ 129] 00:35:03.717 bw ( KiB/s): min= 2180, max= 2432, per=4.21%, avg=2379.16, stdev=78.19, samples=19 00:35:03.717 iops : min= 545, max= 608, avg=594.79, stdev=19.55, samples=19 00:35:03.717 lat (msec) : 20=0.44%, 50=99.29%, 250=0.27% 00:35:03.717 cpu : usr=98.80%, sys=0.79%, ctx=18, majf=0, minf=37 00:35:03.717 IO depths : 1=4.9%, 2=10.6%, 4=23.6%, 8=53.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=5954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.717 filename2: (groupid=0, jobs=1): err= 0: pid=521199: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=595, BW=2381KiB/s (2438kB/s)(23.5MiB/10112msec) 00:35:03.717 slat (usec): min=4, max=102, avg=23.14, stdev=18.51 00:35:03.717 clat (msec): min=11, max=128, avg=26.66, stdev= 5.71 00:35:03.717 lat (msec): min=11, max=128, avg=26.68, stdev= 5.71 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.717 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.717 | 99.00th=[ 
34], 99.50th=[ 59], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.717 | 99.99th=[ 129] 00:35:03.717 bw ( KiB/s): min= 2180, max= 2544, per=4.26%, avg=2407.79, stdev=79.07, samples=19 00:35:03.717 iops : min= 545, max= 636, avg=601.95, stdev=19.77, samples=19 00:35:03.717 lat (msec) : 20=1.66%, 50=97.81%, 100=0.27%, 250=0.27% 00:35:03.717 cpu : usr=98.88%, sys=0.70%, ctx=12, majf=0, minf=61 00:35:03.717 IO depths : 1=4.3%, 2=9.8%, 4=22.5%, 8=54.9%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.717 issued rwts: total=6020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.717 filename2: (groupid=0, jobs=1): err= 0: pid=521200: Wed Apr 24 10:30:15 2024 00:35:03.717 read: IOPS=581, BW=2325KiB/s (2381kB/s)(22.9MiB/10091msec) 00:35:03.717 slat (usec): min=4, max=104, avg=22.11, stdev=19.82 00:35:03.717 clat (msec): min=6, max=128, avg=27.40, stdev= 6.84 00:35:03.717 lat (msec): min=6, max=128, avg=27.42, stdev= 6.84 00:35:03.717 clat percentiles (msec): 00:35:03.717 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 27], 00:35:03.717 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.717 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 31], 95.00th=[ 38], 00:35:03.717 | 99.00th=[ 46], 99.50th=[ 46], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.717 | 99.99th=[ 129] 00:35:03.717 bw ( KiB/s): min= 2180, max= 2432, per=4.15%, avg=2346.32, stdev=60.24, samples=19 00:35:03.717 iops : min= 545, max= 608, avg=586.58, stdev=15.06, samples=19 00:35:03.717 lat (msec) : 10=1.38%, 20=1.99%, 50=96.35%, 250=0.27% 00:35:03.717 cpu : usr=98.85%, sys=0.73%, ctx=16, majf=0, minf=30 00:35:03.717 IO depths : 1=0.3%, 2=2.0%, 4=9.5%, 8=73.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 complete : 0=0.0%, 4=91.1%, 8=5.7%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 issued rwts: total=5866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.718 filename2: (groupid=0, jobs=1): err= 0: pid=521201: Wed Apr 24 10:30:15 2024 00:35:03.718 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.8MiB/10096msec) 00:35:03.718 slat (usec): min=6, max=111, avg=25.92, stdev=21.37 00:35:03.718 clat (msec): min=6, max=129, avg=27.51, stdev= 7.41 00:35:03.718 lat (msec): min=6, max=129, avg=27.54, stdev= 7.41 00:35:03.718 clat percentiles (msec): 00:35:03.718 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.718 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.718 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 32], 95.00th=[ 41], 00:35:03.718 | 99.00th=[ 46], 99.50th=[ 47], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.718 | 99.99th=[ 130] 00:35:03.718 bw ( KiB/s): min= 2144, max= 2432, per=4.12%, avg=2328.00, stdev=86.62, samples=19 00:35:03.718 iops : min= 536, max= 608, avg=582.00, stdev=21.65, samples=19 00:35:03.718 lat (msec) : 10=1.27%, 20=3.19%, 50=95.27%, 250=0.27% 00:35:03.718 cpu : usr=98.88%, sys=0.71%, ctx=15, majf=0, minf=54 00:35:03.718 IO depths : 1=0.9%, 2=2.9%, 4=11.7%, 8=70.2%, 16=14.2%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 complete : 0=0.0%, 4=91.6%, 8=5.0%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 issued rwts: total=5833,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.718 filename2: (groupid=0, jobs=1): err= 0: pid=521202: Wed Apr 24 10:30:15 2024 00:35:03.718 read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.4MiB/10100msec) 00:35:03.718 slat (usec): min=6, max=119, avg=36.18, stdev=24.61 00:35:03.718 clat (msec): min=6, max=124, avg=26.67, stdev= 6.03 00:35:03.718 lat (msec): min=6, max=124, avg=26.70, stdev= 6.03 00:35:03.718 clat percentiles (msec): 00:35:03.718 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 26], 00:35:03.718 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.718 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 33], 00:35:03.718 | 99.00th=[ 46], 99.50th=[ 48], 99.90th=[ 125], 99.95th=[ 125], 00:35:03.718 | 99.99th=[ 125] 00:35:03.718 bw ( KiB/s): min= 2304, max= 2496, per=4.23%, avg=2391.58, stdev=60.90, samples=19 00:35:03.718 iops : min= 576, max= 624, avg=597.89, stdev=15.22, samples=19 00:35:03.718 lat (msec) : 10=2.11%, 20=1.91%, 50=95.63%, 100=0.25%, 250=0.10% 00:35:03.718 cpu : usr=98.67%, sys=0.92%, ctx=16, majf=0, minf=41 00:35:03.718 IO depths : 1=1.4%, 2=5.3%, 4=18.9%, 8=62.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 complete : 0=0.0%, 4=93.1%, 8=1.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.718 filename2: (groupid=0, jobs=1): err= 0: pid=521203: Wed Apr 24 10:30:15 2024 00:35:03.718 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.4MiB/10113msec) 00:35:03.718 slat (usec): min=4, max=112, avg=42.84, stdev=21.85 00:35:03.718 clat (msec): min=21, max=129, avg=26.64, stdev= 5.58 00:35:03.718 lat (msec): min=21, max=129, avg=26.68, stdev= 5.57 00:35:03.718 clat percentiles (msec): 00:35:03.718 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.718 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.718 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:35:03.718 | 99.00th=[ 30], 99.50th=[ 59], 99.90th=[ 129], 99.95th=[ 129], 00:35:03.718 | 99.99th=[ 130] 00:35:03.718 bw ( KiB/s): min= 2180, max= 2432, per=4.22%, avg=2385.05, stdev=75.85, samples=19 00:35:03.718 iops : min= 545, max= 608, avg=596.26, stdev=18.96, samples=19 00:35:03.718 lat (msec) : 50=99.47%, 100=0.27%, 250=0.27% 00:35:03.718 cpu : usr=98.97%, sys=0.62%, ctx=8, majf=0, minf=48 00:35:03.718 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.718 filename2: (groupid=0, jobs=1): err= 0: pid=521204: Wed Apr 24 10:30:15 2024 00:35:03.718 read: IOPS=608, BW=2432KiB/s (2491kB/s)(23.8MiB/10008msec) 00:35:03.718 slat (usec): min=5, max=107, avg=19.49, stdev=19.22 00:35:03.718 clat (usec): min=3595, max=47578, avg=26161.77, stdev=2942.65 00:35:03.718 lat (usec): min=3616, max=47593, avg=26181.25, stdev=2943.02 00:35:03.718 clat percentiles (usec): 00:35:03.718 | 1.00th=[11469], 5.00th=[23987], 10.00th=[25297], 20.00th=[25822], 00:35:03.718 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 
00:35:03.718 | 70.00th=[26870], 80.00th=[27132], 90.00th=[27395], 95.00th=[27919], 00:35:03.718 | 99.00th=[32637], 99.50th=[33817], 99.90th=[47449], 99.95th=[47449], 00:35:03.718 | 99.99th=[47449] 00:35:03.718 bw ( KiB/s): min= 2304, max= 2688, per=4.29%, avg=2428.00, stdev=92.41, samples=20 00:35:03.718 iops : min= 576, max= 672, avg=607.00, stdev=23.10, samples=20 00:35:03.718 lat (msec) : 4=0.12%, 10=0.74%, 20=2.60%, 50=96.55% 00:35:03.718 cpu : usr=98.93%, sys=0.68%, ctx=15, majf=0, minf=37 00:35:03.718 IO depths : 1=3.6%, 2=8.6%, 4=21.4%, 8=57.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 issued rwts: total=6086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.718 filename2: (groupid=0, jobs=1): err= 0: pid=521205: Wed Apr 24 10:30:15 2024 00:35:03.718 read: IOPS=606, BW=2425KiB/s (2483kB/s)(23.7MiB/10004msec) 00:35:03.718 slat (usec): min=4, max=111, avg=27.32, stdev=14.78 00:35:03.718 clat (usec): min=4393, max=48429, avg=26176.69, stdev=3338.58 00:35:03.718 lat (usec): min=4405, max=48437, avg=26204.02, stdev=3339.61 00:35:03.718 clat percentiles (usec): 00:35:03.718 | 1.00th=[ 9634], 5.00th=[22414], 10.00th=[25297], 20.00th=[25822], 00:35:03.718 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:35:03.718 | 70.00th=[26870], 80.00th=[27132], 90.00th=[27395], 95.00th=[28181], 00:35:03.718 | 99.00th=[36439], 99.50th=[41157], 99.90th=[47449], 99.95th=[48497], 00:35:03.718 | 99.99th=[48497] 00:35:03.718 bw ( KiB/s): min= 2304, max= 2664, per=4.29%, avg=2425.68, stdev=88.23, samples=19 00:35:03.718 iops : min= 576, max= 666, avg=606.42, stdev=22.06, samples=19 00:35:03.718 lat (msec) : 10=1.01%, 20=2.13%, 50=96.87% 00:35:03.718 cpu : usr=95.73%, sys=2.16%, ctx=150, majf=0, minf=63 00:35:03.718 IO depths : 1=3.7%, 2=8.5%, 4=21.9%, 8=56.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.718 issued rwts: total=6065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:03.718 filename2: (groupid=0, jobs=1): err= 0: pid=521206: Wed Apr 24 10:30:15 2024 00:35:03.718 read: IOPS=584, BW=2340KiB/s (2396kB/s)(23.1MiB/10113msec) 00:35:03.718 slat (usec): min=4, max=116, avg=22.61, stdev=20.06 00:35:03.718 clat (msec): min=9, max=135, avg=26.98, stdev= 4.53 00:35:03.718 lat (msec): min=9, max=135, avg=27.00, stdev= 4.53 00:35:03.718 clat percentiles (msec): 00:35:03.718 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 26], 00:35:03.718 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 27], 00:35:03.718 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 33], 00:35:03.718 | 99.00th=[ 44], 99.50th=[ 46], 99.90th=[ 48], 99.95th=[ 136], 00:35:03.718 | 99.99th=[ 136] 00:35:03.718 bw ( KiB/s): min= 2176, max= 2501, per=4.18%, avg=2365.25, stdev=78.26, samples=20 00:35:03.718 iops : min= 544, max= 625, avg=591.30, stdev=19.54, samples=20 00:35:03.718 lat (msec) : 10=0.25%, 20=1.93%, 50=97.75%, 250=0.07% 00:35:03.718 cpu : usr=98.79%, sys=0.81%, ctx=11, majf=0, minf=83 00:35:03.718 IO depths : 1=1.2%, 2=3.1%, 4=11.1%, 8=70.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:35:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0%
00:35:03.718 complete : 0=0.0%, 4=91.6%, 8=5.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:03.718 issued rwts: total=5916,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:03.718 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:03.718
00:35:03.718 Run status group 0 (all jobs):
00:35:03.718 READ: bw=55.2MiB/s (57.9MB/s), 2203KiB/s-2507KiB/s (2256kB/s-2567kB/s), io=561MiB (588MB), run=10004-10154msec
00:35:03.718 10:30:15 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:35:03.718 10:30:15 -- target/dif.sh@43 -- # local sub
00:35:03.718 10:30:15 -- target/dif.sh@45 -- # for sub in "$@"
00:35:03.718 10:30:15 -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:03.718 10:30:15 -- target/dif.sh@36 -- # local sub_id=0
00:35:03.718 10:30:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:03.718 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:03.718 10:30:15 -- common/autotest_common.sh@10 -- # set +x
00:35:03.718 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:03.718 10:30:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:03.718 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:03.718 10:30:15 -- common/autotest_common.sh@10 -- # set +x
00:35:03.718 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:03.718 10:30:15 -- target/dif.sh@45 -- # for sub in "$@"
00:35:03.718 10:30:15 -- target/dif.sh@46 -- # destroy_subsystem 1
00:35:03.718 10:30:15 -- target/dif.sh@36 -- # local sub_id=1
00:35:03.718 10:30:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x
00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:03.719 10:30:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x
00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:03.719 10:30:15 -- target/dif.sh@45 -- # for sub in "$@"
00:35:03.719 10:30:15 -- target/dif.sh@46 -- # destroy_subsystem 2
00:35:03.719 10:30:15 -- target/dif.sh@36 -- # local sub_id=2
00:35:03.719 10:30:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x
00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:03.719 10:30:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x
00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:03.719 10:30:15 -- target/dif.sh@115 -- # NULL_DIF=1
00:35:03.719 10:30:15 -- target/dif.sh@115 -- # bs=8k,16k,128k
00:35:03.719 10:30:15 -- target/dif.sh@115 -- # numjobs=2
00:35:03.719 10:30:15 -- target/dif.sh@115 -- # iodepth=8
00:35:03.719 10:30:15 -- target/dif.sh@115 -- # runtime=5
00:35:03.719 10:30:15 -- target/dif.sh@115 -- # files=1
00:35:03.719 10:30:15 -- target/dif.sh@117 -- # create_subsystems 0 1
00:35:03.719 10:30:15 -- target/dif.sh@28 -- # local sub
00:35:03.719 10:30:15 -- target/dif.sh@30 -- # for sub in "$@"
00:35:03.719 10:30:15
-- target/dif.sh@31 -- # create_subsystem 0 00:35:03.719 10:30:15 -- target/dif.sh@18 -- # local sub_id=0 00:35:03.719 10:30:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 bdev_null0 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 [2024-04-24 10:30:15.300453] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.719 10:30:15 -- target/dif.sh@31 -- # create_subsystem 1 00:35:03.719 10:30:15 -- target/dif.sh@18 -- # local sub_id=1 00:35:03.719 10:30:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 bdev_null1 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.719 10:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.719 10:30:15 -- common/autotest_common.sh@10 -- # set +x 00:35:03.719 10:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.719 10:30:15 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:03.719 10:30:15 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:03.719 10:30:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:03.719 10:30:15 -- nvmf/common.sh@520 -- # config=() 00:35:03.719 10:30:15 -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.719 10:30:15 -- nvmf/common.sh@520 -- # local subsystem config 00:35:03.719 10:30:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:03.719 10:30:15 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.719 10:30:15 -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.719 10:30:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:03.719 { 00:35:03.719 "params": { 00:35:03.719 "name": "Nvme$subsystem", 00:35:03.719 "trtype": "$TEST_TRANSPORT", 00:35:03.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.719 "adrfam": "ipv4", 00:35:03.719 "trsvcid": "$NVMF_PORT", 00:35:03.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.719 "hdgst": ${hdgst:-false}, 00:35:03.719 "ddgst": ${ddgst:-false} 00:35:03.719 }, 00:35:03.719 "method": "bdev_nvme_attach_controller" 00:35:03.719 } 00:35:03.719 EOF 00:35:03.719 )") 00:35:03.719 10:30:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:03.719 10:30:15 -- target/dif.sh@54 -- # local file 00:35:03.719 10:30:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.719 10:30:15 -- target/dif.sh@56 -- # cat 00:35:03.719 10:30:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:03.719 10:30:15 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.719 10:30:15 -- common/autotest_common.sh@1320 -- # shift 00:35:03.719 10:30:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:03.719 10:30:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.719 10:30:15 -- nvmf/common.sh@542 -- # cat 00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.719 10:30:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:03.719 10:30:15 -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.719 10:30:15 -- target/dif.sh@73 -- # cat 00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:03.719 10:30:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:03.719 10:30:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:03.719 { 00:35:03.719 "params": { 00:35:03.719 "name": "Nvme$subsystem", 00:35:03.719 "trtype": "$TEST_TRANSPORT", 00:35:03.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.719 "adrfam": "ipv4", 00:35:03.719 "trsvcid": "$NVMF_PORT", 00:35:03.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.719 "hdgst": ${hdgst:-false}, 00:35:03.719 "ddgst": ${ddgst:-false} 00:35:03.719 }, 00:35:03.719 "method": "bdev_nvme_attach_controller" 00:35:03.719 } 00:35:03.719 EOF 00:35:03.719 )") 00:35:03.719 10:30:15 -- target/dif.sh@72 -- # (( file++ )) 00:35:03.719 10:30:15 -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.719 10:30:15 -- nvmf/common.sh@542 -- # cat 00:35:03.719 10:30:15 -- nvmf/common.sh@544 -- # jq . 
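The xtrace entries above show how the harness builds the JSON that fio will consume: gen_nvmf_target_json appends one bdev_nvme_attach_controller stanza per subsystem to a bash array (the nvmf/common.sh@542 config+=... heredocs), joins the stanzas with a comma IFS, and runs the result through jq as a syntax check. A minimal sketch of that pattern, with names and values taken from the trace; the exact wrapper object the real helper emits is not shown here, so the sketch wraps the stanzas in a bare JSON array just to keep the jq input valid on its own:

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas (the IFS=, step below) and validate.
# The joined text is what the printf in the next trace entries prints.
(IFS=','; printf '[%s]\n' "${config[*]}") | jq .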
00:35:03.719 10:30:15 -- nvmf/common.sh@545 -- # IFS=,
00:35:03.719 10:30:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:35:03.719 "params": {
00:35:03.719 "name": "Nvme0",
00:35:03.719 "trtype": "tcp",
00:35:03.719 "traddr": "10.0.0.2",
00:35:03.719 "adrfam": "ipv4",
00:35:03.719 "trsvcid": "4420",
00:35:03.719 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:03.719 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:03.719 "hdgst": false,
00:35:03.719 "ddgst": false
00:35:03.719 },
00:35:03.719 "method": "bdev_nvme_attach_controller"
00:35:03.719 },{
00:35:03.719 "params": {
00:35:03.719 "name": "Nvme1",
00:35:03.719 "trtype": "tcp",
00:35:03.719 "traddr": "10.0.0.2",
00:35:03.719 "adrfam": "ipv4",
00:35:03.719 "trsvcid": "4420",
00:35:03.719 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:03.719 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:03.719 "hdgst": false,
00:35:03.719 "ddgst": false
00:35:03.719 },
00:35:03.719 "method": "bdev_nvme_attach_controller"
00:35:03.719 }'
00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # asan_lib=
00:35:03.719 10:30:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:35:03.719 10:30:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan
00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:35:03.719 10:30:15 -- common/autotest_common.sh@1324 -- # asan_lib=
00:35:03.719 10:30:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:35:03.719 10:30:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:03.719 10:30:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:03.719 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:35:03.719 ...
00:35:03.719 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:35:03.719 ...
00:35:03.719 fio-3.35
00:35:03.719 Starting 4 threads
00:35:03.719 EAL: No free 2048 kB hugepages reported on node 1
00:35:03.720 [2024-04-24 10:30:16.337034] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
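The @1331 entries above show the launch mechanics: a stock fio binary loads the SPDK bdev engine through LD_PRELOAD, the generated target JSON arrives on /dev/fd/62 and the fio job file (from gen_fio_conf) on /dev/fd/61. Stripped of the harness plumbing, a roughly equivalent invocation would be (paths from this workspace; process substitutions stand in for the harness's fd redirections):

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf <(gen_nvmf_target_json 0 1) \
  <(gen_fio_conf)

The spdk_rpc_listen *ERROR* just above, together with its companion on the next line, is consistent with the fio plugin bringing up its own SPDK application while the nvmf target already owns /var/tmp/spdk.sock; every job below still completes with err= 0.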
00:35:03.720 [2024-04-24 10:30:16.337075] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:08.987 00:35:08.987 filename0: (groupid=0, jobs=1): err= 0: pid=523583: Wed Apr 24 10:30:21 2024 00:35:08.987 read: IOPS=2706, BW=21.1MiB/s (22.2MB/s)(106MiB/5002msec) 00:35:08.987 slat (nsec): min=4028, max=33172, avg=8983.94, stdev=2799.07 00:35:08.987 clat (usec): min=1551, max=44304, avg=2931.59, stdev=1102.60 00:35:08.987 lat (usec): min=1558, max=44322, avg=2940.57, stdev=1102.58 00:35:08.987 clat percentiles (usec): 00:35:08.987 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:35:08.987 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2900], 00:35:08.987 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3556], 95.00th=[ 3949], 00:35:08.987 | 99.00th=[ 4490], 99.50th=[ 4490], 99.90th=[ 4817], 99.95th=[44303], 00:35:08.987 | 99.99th=[44303] 00:35:08.987 bw ( KiB/s): min=20080, max=22448, per=25.11%, avg=21760.00, stdev=732.99, samples=9 00:35:08.987 iops : min= 2510, max= 2806, avg=2720.00, stdev=91.62, samples=9 00:35:08.987 lat (msec) : 2=0.80%, 4=94.67%, 10=4.48%, 50=0.06% 00:35:08.987 cpu : usr=96.22%, sys=3.46%, ctx=10, majf=0, minf=31 00:35:08.987 IO depths : 1=0.2%, 2=1.4%, 4=69.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.987 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.987 issued rwts: total=13536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:08.987 filename0: (groupid=0, jobs=1): err= 0: pid=523584: Wed Apr 24 10:30:21 2024 00:35:08.987 read: IOPS=2693, BW=21.0MiB/s (22.1MB/s)(105MiB/5002msec) 00:35:08.987 slat (nsec): min=4012, max=28728, avg=9040.40, stdev=2889.53 00:35:08.987 clat (usec): min=1542, max=43658, avg=2946.37, stdev=1071.87 00:35:08.987 lat (usec): min=1549, max=43671, avg=2955.41, stdev=1071.72 00:35:08.987 clat percentiles (usec): 00:35:08.987 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2704], 00:35:08.987 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2900], 00:35:08.987 | 70.00th=[ 2933], 80.00th=[ 3032], 90.00th=[ 3458], 95.00th=[ 3851], 00:35:08.987 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4686], 99.95th=[43779], 00:35:08.987 | 99.99th=[43779] 00:35:08.987 bw ( KiB/s): min=19872, max=22208, per=24.79%, avg=21486.22, stdev=694.59, samples=9 00:35:08.987 iops : min= 2484, max= 2776, avg=2685.78, stdev=86.82, samples=9 00:35:08.987 lat (msec) : 2=0.59%, 4=95.20%, 10=4.16%, 50=0.06% 00:35:08.988 cpu : usr=96.34%, sys=3.36%, ctx=9, majf=0, minf=69 00:35:08.988 IO depths : 1=0.1%, 2=1.1%, 4=68.2%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.988 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.988 issued rwts: total=13473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:08.988 filename1: (groupid=0, jobs=1): err= 0: pid=523585: Wed Apr 24 10:30:21 2024 00:35:08.988 read: IOPS=2690, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:35:08.988 slat (nsec): min=6103, max=43036, avg=9042.63, stdev=3038.06 00:35:08.988 clat (usec): min=945, max=45075, avg=2949.68, stdev=1098.35 00:35:08.988 lat (usec): min=952, max=45099, avg=2958.72, stdev=1098.36 00:35:08.988 clat percentiles (usec): 00:35:08.988 
| 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2704], 00:35:08.988 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2900], 00:35:08.988 | 70.00th=[ 2933], 80.00th=[ 3064], 90.00th=[ 3425], 95.00th=[ 3785], 00:35:08.988 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 5014], 99.95th=[44827], 00:35:08.988 | 99.99th=[44827] 00:35:08.988 bw ( KiB/s): min=19591, max=22368, per=24.78%, avg=21479.89, stdev=835.97, samples=9 00:35:08.988 iops : min= 2448, max= 2796, avg=2684.89, stdev=104.74, samples=9 00:35:08.988 lat (usec) : 1000=0.02% 00:35:08.988 lat (msec) : 2=0.32%, 4=96.14%, 10=3.46%, 50=0.06% 00:35:08.988 cpu : usr=96.48%, sys=3.18%, ctx=7, majf=0, minf=47 00:35:08.988 IO depths : 1=0.2%, 2=1.3%, 4=67.4%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.988 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.988 issued rwts: total=13456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:08.988 filename1: (groupid=0, jobs=1): err= 0: pid=523586: Wed Apr 24 10:30:21 2024 00:35:08.988 read: IOPS=2742, BW=21.4MiB/s (22.5MB/s)(107MiB/5002msec) 00:35:08.988 slat (nsec): min=6061, max=38234, avg=9046.04, stdev=3014.79 00:35:08.988 clat (usec): min=1072, max=5333, avg=2891.22, stdev=481.58 00:35:08.988 lat (usec): min=1078, max=5339, avg=2900.26, stdev=481.24 00:35:08.988 clat percentiles (usec): 00:35:08.988 | 1.00th=[ 1237], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2671], 00:35:08.988 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2900], 00:35:08.988 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3523], 95.00th=[ 3949], 00:35:08.988 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 4883], 99.95th=[ 4948], 00:35:08.988 | 99.99th=[ 5342] 00:35:08.988 bw ( KiB/s): min=21152, max=24016, per=25.27%, avg=21900.44, stdev=883.40, samples=9 00:35:08.988 iops : min= 2644, max= 3002, avg=2737.56, stdev=110.43, samples=9 00:35:08.988 lat (msec) : 2=2.43%, 4=93.61%, 10=3.97% 00:35:08.988 cpu : usr=96.24%, sys=3.42%, ctx=8, majf=0, minf=41 00:35:08.988 IO depths : 1=0.1%, 2=0.9%, 4=71.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.988 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.988 issued rwts: total=13720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:08.988 00:35:08.988 Run status group 0 (all jobs): 00:35:08.988 READ: bw=84.6MiB/s (88.7MB/s), 21.0MiB/s-21.4MiB/s (22.0MB/s-22.5MB/s), io=423MiB (444MB), run=5001-5002msec 00:35:08.988 10:30:21 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:08.988 10:30:21 -- target/dif.sh@43 -- # local sub 00:35:08.988 10:30:21 -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.988 10:30:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:08.988 10:30:21 -- target/dif.sh@36 -- # local sub_id=0 00:35:08.988 10:30:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 
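The teardown traced around this point mirrors setup in reverse: for each index, destroy_subsystem deletes the NVMe-oF subsystem first and then the null bdev that backed its namespace. Outside the harness, the same two RPCs per index would look like this (scripts/rpc.py is SPDK's standard RPC client, standing in for the rpc_cmd wrapper; default RPC socket assumed):

# destroy_subsystem 0, as plain RPC calls
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0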
00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.988 10:30:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:08.988 10:30:21 -- target/dif.sh@36 -- # local sub_id=1 00:35:08.988 10:30:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 00:35:08.988 real 0m24.130s 00:35:08.988 user 4m53.841s 00:35:08.988 sys 0m4.473s 00:35:08.988 10:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 ************************************ 00:35:08.988 END TEST fio_dif_rand_params 00:35:08.988 ************************************ 00:35:08.988 10:30:21 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:08.988 10:30:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:08.988 10:30:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 ************************************ 00:35:08.988 START TEST fio_dif_digest 00:35:08.988 ************************************ 00:35:08.988 10:30:21 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:35:08.988 10:30:21 -- target/dif.sh@123 -- # local NULL_DIF 00:35:08.988 10:30:21 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:08.988 10:30:21 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:08.988 10:30:21 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:08.988 10:30:21 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:08.988 10:30:21 -- target/dif.sh@127 -- # numjobs=3 00:35:08.988 10:30:21 -- target/dif.sh@127 -- # iodepth=3 00:35:08.988 10:30:21 -- target/dif.sh@127 -- # runtime=10 00:35:08.988 10:30:21 -- target/dif.sh@128 -- # hdgst=true 00:35:08.988 10:30:21 -- target/dif.sh@128 -- # ddgst=true 00:35:08.988 10:30:21 -- target/dif.sh@130 -- # create_subsystems 0 00:35:08.988 10:30:21 -- target/dif.sh@28 -- # local sub 00:35:08.988 10:30:21 -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.988 10:30:21 -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.988 10:30:21 -- target/dif.sh@18 -- # local sub_id=0 00:35:08.988 10:30:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 bdev_null0 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.988 10:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:08.988 10:30:21 -- common/autotest_common.sh@10 -- # set +x 00:35:08.988 [2024-04-24 10:30:21.748296] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.988 10:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:08.988 10:30:21 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:08.988 10:30:21 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:08.988 10:30:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:08.988 10:30:21 -- nvmf/common.sh@520 -- # config=() 00:35:08.988 10:30:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.988 10:30:21 -- nvmf/common.sh@520 -- # local subsystem config 00:35:08.988 10:30:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.988 10:30:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:08.988 10:30:21 -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.988 10:30:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:08.988 { 00:35:08.988 "params": { 00:35:08.988 "name": "Nvme$subsystem", 00:35:08.988 "trtype": "$TEST_TRANSPORT", 00:35:08.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.988 "adrfam": "ipv4", 00:35:08.988 "trsvcid": "$NVMF_PORT", 00:35:08.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.988 "hdgst": ${hdgst:-false}, 00:35:08.988 "ddgst": ${ddgst:-false} 00:35:08.988 }, 00:35:08.988 "method": "bdev_nvme_attach_controller" 00:35:08.988 } 00:35:08.988 EOF 00:35:08.988 )") 00:35:08.988 10:30:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:08.988 10:30:21 -- target/dif.sh@54 -- # local file 00:35:08.988 10:30:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.988 10:30:21 -- target/dif.sh@56 -- # cat 00:35:08.988 10:30:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:08.988 10:30:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.988 10:30:21 -- common/autotest_common.sh@1320 -- # shift 00:35:08.988 10:30:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:08.988 10:30:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.988 10:30:21 -- nvmf/common.sh@542 -- # cat 00:35:08.988 10:30:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.988 10:30:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.988 10:30:21 -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:08.989 10:30:21 -- nvmf/common.sh@544 -- # jq . 
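For fio_dif_digest the backing device is created with protection information: the trace above requests a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 3, then exports it over NVMe/TCP on 10.0.0.2:4420. Reproduced as plain RPC calls (arguments copied from the xtrace; scripts/rpc.py stands in for the harness's rpc_cmd wrapper):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420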
00:35:08.989 10:30:21 -- nvmf/common.sh@545 -- # IFS=,
00:35:08.989 10:30:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:35:08.989 "params": {
00:35:08.989 "name": "Nvme0",
00:35:08.989 "trtype": "tcp",
00:35:08.989 "traddr": "10.0.0.2",
00:35:08.989 "adrfam": "ipv4",
00:35:08.989 "trsvcid": "4420",
00:35:08.989 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:08.989 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:08.989 "hdgst": true,
00:35:08.989 "ddgst": true
00:35:08.989 },
00:35:08.989 "method": "bdev_nvme_attach_controller"
00:35:08.989 }'
00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # asan_lib=
00:35:08.989 10:30:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:35:08.989 10:30:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan
00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:35:08.989 10:30:21 -- common/autotest_common.sh@1324 -- # asan_lib=
00:35:08.989 10:30:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:35:08.989 10:30:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:08.989 10:30:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:08.989 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:35:08.989 ...
00:35:08.989 fio-3.35
00:35:08.989 Starting 3 threads
00:35:08.989 EAL: No free 2048 kB hugepages reported on node 1
00:35:09.556 [2024-04-24 10:30:22.533356] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
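The only change from the earlier runs is in the attach parameters printed above: "hdgst": true and "ddgst": true enable the NVMe/TCP header and data digests (CRC32C checks over PDU headers and payloads), which is what fio_dif_digest exercises; the *ERROR* line above and its companion on the next line are the same benign socket-in-use noise seen in the previous runs. For comparison, the kernel initiator turns on the same digests at connect time (standard nvme-cli flags; target coordinates as used in this run):

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest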
00:35:09.556 [2024-04-24 10:30:22.533405] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:19.530 00:35:19.530 filename0: (groupid=0, jobs=1): err= 0: pid=524660: Wed Apr 24 10:30:32 2024 00:35:19.530 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(326MiB/10032msec) 00:35:19.530 slat (nsec): min=6304, max=23889, avg=11670.04, stdev=2150.64 00:35:19.530 clat (usec): min=8636, max=92685, avg=11534.53, stdev=4629.44 00:35:19.530 lat (usec): min=8643, max=92698, avg=11546.20, stdev=4629.46 00:35:19.530 clat percentiles (usec): 00:35:19.530 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:35:19.530 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:35:19.530 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649], 00:35:19.530 | 99.00th=[50594], 99.50th=[52691], 99.90th=[53740], 99.95th=[54264], 00:35:19.530 | 99.99th=[92799] 00:35:19.530 bw ( KiB/s): min=27904, max=35328, per=31.75%, avg=33310.53, stdev=1981.06, samples=19 00:35:19.530 iops : min= 218, max= 276, avg=260.21, stdev=15.46, samples=19 00:35:19.530 lat (msec) : 10=8.02%, 20=90.87%, 50=0.04%, 100=1.07% 00:35:19.530 cpu : usr=95.20%, sys=4.49%, ctx=25, majf=0, minf=100 00:35:19.530 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.530 issued rwts: total=2606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:19.530 filename0: (groupid=0, jobs=1): err= 0: pid=524661: Wed Apr 24 10:30:32 2024 00:35:19.530 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(356MiB/10045msec) 00:35:19.530 slat (nsec): min=4023, max=25850, avg=11653.50, stdev=2050.84 00:35:19.530 clat (usec): min=6385, max=48662, avg=10545.76, stdev=1389.15 00:35:19.530 lat (usec): min=6398, max=48669, avg=10557.42, stdev=1389.15 00:35:19.530 clat percentiles (usec): 00:35:19.530 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9896], 00:35:19.530 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:35:19.530 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:35:19.530 | 99.00th=[12518], 99.50th=[12780], 99.90th=[17957], 99.95th=[46924], 00:35:19.530 | 99.99th=[48497] 00:35:19.530 bw ( KiB/s): min=34560, max=38400, per=34.75%, avg=36454.40, stdev=880.56, samples=20 00:35:19.530 iops : min= 270, max= 300, avg=284.80, stdev= 6.88, samples=20 00:35:19.530 lat (msec) : 10=21.96%, 20=77.96%, 50=0.07% 00:35:19.530 cpu : usr=94.50%, sys=5.16%, ctx=18, majf=0, minf=117 00:35:19.530 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.530 issued rwts: total=2850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:19.530 filename0: (groupid=0, jobs=1): err= 0: pid=524662: Wed Apr 24 10:30:32 2024 00:35:19.530 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(347MiB/10004msec) 00:35:19.530 slat (nsec): min=6382, max=28240, avg=11604.32, stdev=2194.73 00:35:19.530 clat (usec): min=4987, max=15027, avg=10792.97, stdev=1019.71 00:35:19.530 lat (usec): min=4994, max=15051, avg=10804.57, stdev=1019.78 00:35:19.530 clat percentiles (usec): 
00:35:19.530 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10159], 00:35:19.530 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:35:19.530 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:35:19.530 | 99.00th=[13042], 99.50th=[13304], 99.90th=[15008], 99.95th=[15008], 00:35:19.530 | 99.99th=[15008] 00:35:19.530 bw ( KiB/s): min=34304, max=37888, per=33.87%, avg=35530.11, stdev=1043.65, samples=19 00:35:19.530 iops : min= 268, max= 296, avg=277.58, stdev= 8.15, samples=19 00:35:19.530 lat (msec) : 10=15.30%, 20=84.70% 00:35:19.530 cpu : usr=94.35%, sys=5.31%, ctx=18, majf=0, minf=160 00:35:19.530 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.530 issued rwts: total=2777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:19.530 00:35:19.530 Run status group 0 (all jobs): 00:35:19.530 READ: bw=102MiB/s (107MB/s), 32.5MiB/s-35.5MiB/s (34.0MB/s-37.2MB/s), io=1029MiB (1079MB), run=10004-10045msec 00:35:19.788 10:30:32 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:19.788 10:30:32 -- target/dif.sh@43 -- # local sub 00:35:19.788 10:30:32 -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.788 10:30:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:19.788 10:30:32 -- target/dif.sh@36 -- # local sub_id=0 00:35:19.788 10:30:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.788 10:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:19.788 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:35:19.788 10:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:19.788 10:30:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:19.788 10:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:19.789 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:35:19.789 10:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:19.789 00:35:19.789 real 0m11.184s 00:35:19.789 user 0m35.312s 00:35:19.789 sys 0m1.782s 00:35:19.789 10:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.789 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:35:19.789 ************************************ 00:35:19.789 END TEST fio_dif_digest 00:35:19.789 ************************************ 00:35:19.789 10:30:32 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:19.789 10:30:32 -- target/dif.sh@147 -- # nvmftestfini 00:35:19.789 10:30:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:19.789 10:30:32 -- nvmf/common.sh@116 -- # sync 00:35:19.789 10:30:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:19.789 10:30:32 -- nvmf/common.sh@119 -- # set +e 00:35:19.789 10:30:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:19.789 10:30:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:19.789 rmmod nvme_tcp 00:35:19.789 rmmod nvme_fabrics 00:35:19.789 rmmod nvme_keyring 00:35:19.789 10:30:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:19.789 10:30:33 -- nvmf/common.sh@123 -- # set -e 00:35:19.789 10:30:33 -- nvmf/common.sh@124 -- # return 0 00:35:19.789 10:30:33 -- nvmf/common.sh@477 -- # '[' -n 515407 ']' 00:35:19.789 10:30:33 -- nvmf/common.sh@478 -- # killprocess 515407 00:35:19.789 10:30:33 -- common/autotest_common.sh@926 -- # '[' -z 515407 ']' 
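The aggregate READ line above can be sanity-checked from its own numbers: three 128 KiB randread jobs at 32.5-35.5 MiB/s each sum to the reported bw=102MiB/s, and total io over runtime gives the same figure (a quick check, values copied from the log):

# 1029 MiB read over roughly 10.045 s of wall time
echo 'scale=1; 1029 / 10.045' | bc   # prints 102.4, matching bw=102MiB/s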
00:35:19.789 10:30:32 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:35:19.789 10:30:32 -- target/dif.sh@147 -- # nvmftestfini
00:35:19.789 10:30:32 -- nvmf/common.sh@476 -- # nvmfcleanup
00:35:19.789 10:30:32 -- nvmf/common.sh@116 -- # sync
00:35:19.789 10:30:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:35:19.789 10:30:32 -- nvmf/common.sh@119 -- # set +e
00:35:19.789 10:30:32 -- nvmf/common.sh@120 -- # for i in {1..20}
00:35:19.789 10:30:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:35:19.789 rmmod nvme_tcp
00:35:19.789 rmmod nvme_fabrics
00:35:19.789 rmmod nvme_keyring
00:35:19.789 10:30:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:35:19.789 10:30:33 -- nvmf/common.sh@123 -- # set -e
00:35:19.789 10:30:33 -- nvmf/common.sh@124 -- # return 0
00:35:19.789 10:30:33 -- nvmf/common.sh@477 -- # '[' -n 515407 ']'
00:35:19.789 10:30:33 -- nvmf/common.sh@478 -- # killprocess 515407
00:35:19.789 10:30:33 -- common/autotest_common.sh@926 -- # '[' -z 515407 ']'
00:35:19.789 10:30:33 -- common/autotest_common.sh@930 -- # kill -0 515407
00:35:19.789 10:30:33 -- common/autotest_common.sh@931 -- # uname
00:35:19.789 10:30:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:19.789 10:30:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 515407
00:35:19.789 10:30:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:35:19.789 10:30:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:35:19.789 10:30:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 515407'
00:35:19.789 killing process with pid 515407
00:35:19.789 10:30:33 -- common/autotest_common.sh@945 -- # kill 515407
00:35:19.789 10:30:33 -- common/autotest_common.sh@950 -- # wait 515407
00:35:20.048 10:30:33 -- nvmf/common.sh@480 -- # '[' iso == iso ']'
00:35:20.048 10:30:33 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:35:22.579 Waiting for block devices as requested
00:35:22.579 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:35:22.579 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:22.579 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:22.579 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:22.837 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:22.837 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:22.837 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:22.837 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:23.094 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:23.094 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:23.094 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:23.094 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:23.352 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:23.352 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:23.352 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:23.611 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:23.611 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:23.611 10:30:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:35:23.611 10:30:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:35:23.611 10:30:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:23.611 10:30:36 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:35:23.611 10:30:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:23.611 10:30:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:23.611 10:30:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:26.145 10:30:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:35:26.145
00:35:26.145 real 1m12.746s
00:35:26.145 user 7m11.286s
00:35:26.145 sys 0m18.284s
00:35:26.145 10:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:26.145 10:30:38 -- common/autotest_common.sh@10 -- # set +x
00:35:26.145 ************************************
00:35:26.145 END TEST nvmf_dif
00:35:26.145 ************************************
00:35:26.145 10:30:38 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:35:26.145 10:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:35:26.145 10:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:35:26.145 10:30:38 -- common/autotest_common.sh@10 -- # set +x
00:35:26.145 ************************************
00:35:26.145 START TEST nvmf_abort_qd_sizes
00:35:26.145 ************************************
00:35:26.145 10:30:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:35:26.145 * Looking for test storage...
00:35:26.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:26.145 10:30:38 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:26.145 10:30:38 -- nvmf/common.sh@7 -- # uname -s
00:35:26.145 10:30:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:26.145 10:30:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:26.145 10:30:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:26.145 10:30:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:26.145 10:30:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:26.145 10:30:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:26.145 10:30:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:26.145 10:30:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:26.145 10:30:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:26.145 10:30:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:26.145 10:30:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:35:26.145 10:30:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:35:26.145 10:30:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:26.145 10:30:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:26.145 10:30:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:26.145 10:30:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:26.145 10:30:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:26.145 10:30:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:26.145 10:30:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:26.145 10:30:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:26.145 10:30:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:26.145 10:30:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:26.145 10:30:38 -- paths/export.sh@5 -- # export PATH
00:35:26.145 10:30:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:26.145 10:30:38 -- nvmf/common.sh@46 -- # : 0
00:35:26.145 10:30:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:35:26.145 10:30:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:35:26.145 10:30:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:35:26.145 10:30:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:26.145 10:30:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:26.145 10:30:38 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:35:26.145 10:30:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:35:26.145 10:30:38 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:35:26.145 10:30:38 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit
00:35:26.145 10:30:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:35:26.145 10:30:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:26.145 10:30:38 -- nvmf/common.sh@436 -- # prepare_net_devs
00:35:26.145 10:30:38 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:35:26.145 10:30:39 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:35:26.145 10:30:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:26.145 10:30:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:26.145 10:30:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:26.145 10:30:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:35:26.145 10:30:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:35:26.145 10:30:39 -- nvmf/common.sh@284 -- # xtrace_disable
00:35:26.145 10:30:39 -- common/autotest_common.sh@10 -- # set +x
00:35:31.414 10:30:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:35:31.414 10:30:43 -- nvmf/common.sh@290 -- # pci_devs=()
00:35:31.414 10:30:43 -- nvmf/common.sh@290 -- # local -a pci_devs
00:35:31.414 10:30:43 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:35:31.414 10:30:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:35:31.414 10:30:43 -- nvmf/common.sh@292 -- # pci_drivers=()
00:35:31.414 10:30:43 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:35:31.414 10:30:43 -- nvmf/common.sh@294 -- # net_devs=()
00:35:31.414 10:30:43 -- nvmf/common.sh@294 -- # local -ga net_devs
00:35:31.414 10:30:43 -- nvmf/common.sh@295 -- # e810=()
00:35:31.414 10:30:43 -- nvmf/common.sh@295 -- # local -ga e810
00:35:31.414 10:30:43 -- nvmf/common.sh@296 -- # x722=()
00:35:31.414 10:30:43 -- nvmf/common.sh@296 -- # local -ga x722
00:35:31.414 10:30:43 -- nvmf/common.sh@297 -- # mlx=()
00:35:31.414 10:30:43 -- nvmf/common.sh@297 -- # local -ga mlx
00:35:31.414 10:30:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:31.414 10:30:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:35:31.414 10:30:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:35:31.414 10:30:43 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:35:31.414 10:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:35:31.414 10:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:35:31.414 Found 0000:86:00.0 (0x8086 - 0x159b)
00:35:31.414 10:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:35:31.414 10:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:35:31.414 Found 0000:86:00.1 (0x8086 - 0x159b)
00:35:31.414 10:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:35:31.414 10:30:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:35:31.414 10:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:31.414 10:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:35:31.414 10:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:31.414 10:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:35:31.414 Found net devices under 0000:86:00.0: cvl_0_0
00:35:31.414 10:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:35:31.414 10:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:35:31.414 10:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:31.414 10:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:35:31.414 10:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:31.414 10:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:35:31.414 Found net devices under 0000:86:00.1: cvl_0_1
00:35:31.414 10:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:35:31.414 10:30:43 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:35:31.414 10:30:43 -- nvmf/common.sh@402 -- # is_hw=yes
00:35:31.414 10:30:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:35:31.414 10:30:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:35:31.414 10:30:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:31.414 10:30:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:31.414 10:30:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:31.414 10:30:43 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:35:31.414 10:30:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:31.414 10:30:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:31.414 10:30:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:35:31.414 10:30:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:31.414 10:30:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:31.414 10:30:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:35:31.414 10:30:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:35:31.414 10:30:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:35:31.414 10:30:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:31.414 10:30:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:31.414 10:30:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:31.414 10:30:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:35:31.414 10:30:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:31.414 10:30:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:31.414 10:30:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:31.414 10:30:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:35:31.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:31.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms
00:35:31.414
00:35:31.414 --- 10.0.0.2 ping statistics ---
00:35:31.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:31.414 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:35:31.414 10:30:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:31.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:31.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:35:31.414
00:35:31.414 --- 10.0.0.1 ping statistics ---
00:35:31.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:31.414 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:35:31.414 10:30:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:31.414 10:30:44 -- nvmf/common.sh@410 -- # return 0
00:35:31.414 10:30:44 -- nvmf/common.sh@438 -- # '[' iso == iso ']'
00:35:31.414 10:30:44 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:35:33.317 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:35:33.317 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:35:34.254 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:35:34.254 10:30:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:34.254 10:30:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:35:34.254 10:30:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:35:34.254 10:30:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:34.254 10:30:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:35:34.254 10:30:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
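nvmftestinit above carves the two E810 ports into a point-to-point rig: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the nvmf_tcp_init trace, the rig boils down to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # root ns -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator reachability

The two pings with 0% packet loss are the smoke test for the rig before any NVMe/TCP traffic starts.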
00:35:34.254 10:30:47 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf
00:35:34.254 10:30:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:35:34.254 10:30:47 -- common/autotest_common.sh@712 -- # xtrace_disable
00:35:34.254 10:30:47 -- common/autotest_common.sh@10 -- # set +x
00:35:34.254 10:30:47 -- nvmf/common.sh@469 -- # nvmfpid=532303
00:35:34.254 10:30:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:35:34.254 10:30:47 -- nvmf/common.sh@470 -- # waitforlisten 532303
00:35:34.254 10:30:47 -- common/autotest_common.sh@819 -- # '[' -z 532303 ']'
00:35:34.254 10:30:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:34.254 10:30:47 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:34.254 10:30:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:34.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:34.254 10:30:47 -- common/autotest_common.sh@828 -- # xtrace_disable
00:35:34.254 10:30:47 -- common/autotest_common.sh@10 -- # set +x
00:35:34.254 [2024-04-24 10:30:47.470570] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization...
00:35:34.254 [2024-04-24 10:30:47.470612] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:34.254 EAL: No free 2048 kB hugepages reported on node 1
00:35:34.513 [2024-04-24 10:30:47.528454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:34.513 [2024-04-24 10:30:47.608947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:35:34.513 [2024-04-24 10:30:47.609052] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:34.513 [2024-04-24 10:30:47.609060] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:34.513 [2024-04-24 10:30:47.609066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:34.513 [2024-04-24 10:30:47.609116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:34.513 [2024-04-24 10:30:47.609215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:35:34.513 [2024-04-24 10:30:47.609231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:35:34.513 [2024-04-24 10:30:47.609232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:35:35.079 10:30:48 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:35:35.079 10:30:48 -- common/autotest_common.sh@852 -- # return 0
00:35:35.079 10:30:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:35:35.079 10:30:48 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:35.079 10:30:48 -- common/autotest_common.sh@10 -- # set +x
00:35:35.079 10:30:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace
00:35:35.079 10:30:48 -- scripts/common.sh@311 -- # local bdf bdfs
00:35:35.079 10:30:48 -- scripts/common.sh@312 -- # local nvmes
00:35:35.079 10:30:48 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 ]]
00:35:35.079 10:30:48 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:35:35.079 10:30:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:35:35.079 10:30:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]]
00:35:35.079 10:30:48 -- scripts/common.sh@322 -- # uname -s
00:35:35.079 10:30:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:35:35.079 10:30:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:35:35.079 10:30:48 -- scripts/common.sh@327 -- # (( 1 ))
00:35:35.079 10:30:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 ))
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0
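nvme_in_userspace above walks the script's cached PCI bus looking for class 0x010802 (the PCI class code for an NVMe controller) and keeps only devices still bound to the kernel nvme driver. A rough standalone equivalent, reading sysfs directly instead of the pci_bus_cache array; this is a sketch, not the script itself:

  # Print the BDF of every NVMe-class device the kernel nvme driver owns.
  for dev in /sys/bus/pci/devices/*; do
      [[ "$(cat "$dev/class")" == 0x010802 ]] || continue
      bdf=${dev##*/}
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
  done

Here that yields the single controller 0000:5e:00.0, which becomes the device under test for spdk_target_abort.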
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target
00:35:35.079 10:30:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:35:35.079 10:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:35:35.079 10:30:48 -- common/autotest_common.sh@10 -- # set +x
00:35:35.079 ************************************
00:35:35.079 START TEST spdk_target_abort
00:35:35.079 ************************************
00:35:35.079 10:30:48 -- common/autotest_common.sh@1104 -- # spdk_target
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target
00:35:35.079 10:30:48 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
00:35:35.079 10:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:35.079 10:30:48 -- common/autotest_common.sh@10 -- # set +x
00:35:38.363 spdk_targetn1
00:35:38.363 10:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:38.363 10:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:38.363 10:30:51 -- common/autotest_common.sh@10 -- # set +x
00:35:38.363 [2024-04-24 10:30:51.164742] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:38.363 10:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
00:35:38.363 10:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:38.363 10:30:51 -- common/autotest_common.sh@10 -- # set +x
00:35:38.363 10:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
00:35:38.363 10:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:38.363 10:30:51 -- common/autotest_common.sh@10 -- # set +x
00:35:38.363 10:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
00:35:38.363 10:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:38.363 10:30:51 -- common/autotest_common.sh@10 -- # set +x
00:35:38.363 [2024-04-24 10:30:51.197719] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:38.363 10:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
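Replayed from the rpc_cmd calls above, the whole spdk_target bring-up is one controller attach plus four NVMe-oF RPCs. A sketch assuming the same socket and names the trace uses:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target   # creates bdev spdk_targetn1
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

The -a on nvmf_create_subsystem allows any host NQN to connect, which is what lets the abort example attach without an explicit host allow-list.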
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@24 -- # local target r
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.363 10:30:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:38.364 10:30:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:35:38.364 EAL: No free 2048 kB hugepages reported on node 1
00:35:41.656 Initializing NVMe Controllers
00:35:41.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:35:41.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:35:41.656 Initialization complete. Launching workers.
00:35:41.656 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12679, failed: 0
00:35:41.656 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1480, failed to submit 11199
00:35:41.656 success 863, unsuccess 617, failed 0
00:35:41.656 10:30:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:41.656 10:30:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:35:41.656 EAL: No free 2048 kB hugepages reported on node 1
00:35:44.997 [2024-04-24 10:30:57.612106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 [2024-04-24 10:30:57.612225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d1b30 is same with the state(5) to be set
00:35:44.997 Initializing NVMe Controllers
00:35:44.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:35:44.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:35:44.997 Initialization complete. Launching workers.
00:35:44.997 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8645, failed: 0
00:35:44.997 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1221, failed to submit 7424
00:35:44.997 success 344, unsuccess 877, failed 0
00:35:44.997 10:30:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:44.997 10:30:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:35:44.997 EAL: No free 2048 kB hugepages reported on node 1
00:35:48.278 Initializing NVMe Controllers
00:35:48.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:35:48.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:35:48.278 Initialization complete. Launching workers.
00:35:48.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 38317, failed: 0
00:35:48.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2778, failed to submit 35539
00:35:48.278 success 555, unsuccess 2223, failed 0
00:35:48.278 10:31:00 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target
00:35:48.278 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:48.278 10:31:00 -- common/autotest_common.sh@10 -- # set +x
00:35:48.278 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:48.278 10:31:00 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:35:48.278 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:35:48.278 10:31:00 -- common/autotest_common.sh@10 -- # set +x
00:35:49.214 10:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:35:49.214 10:31:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 532303
00:35:49.214 10:31:02 -- common/autotest_common.sh@926 -- # '[' -z 532303 ']'
00:35:49.214 10:31:02 -- common/autotest_common.sh@930 -- # kill -0 532303
00:35:49.214 10:31:02 -- common/autotest_common.sh@931 -- # uname
00:35:49.214 10:31:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:49.214 10:31:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 532303
00:35:49.214 10:31:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:35:49.214 10:31:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:35:49.214 10:31:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 532303'
00:35:49.214 killing process with pid 532303
00:35:49.214 10:31:02 -- common/autotest_common.sh@945 -- # kill 532303
00:35:49.214 10:31:02 -- common/autotest_common.sh@950 -- # wait 532303
00:35:49.214
00:35:49.214 real 0m14.143s
00:35:49.214 user 0m56.130s
00:35:49.214 sys 0m2.321s
00:35:49.214 10:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:49.214 10:31:02 -- common/autotest_common.sh@10 -- # set +x
00:35:49.214 ************************************
00:35:49.214 END TEST spdk_target_abort
00:35:49.214 ************************************
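The three runs above differ only in queue depth: rabort loops qds=(4 24 64) into the stock abort example. Reduced to its core, with the target string taken verbatim from the trace:

  TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
  done

-q sets the queue depth under test, -w rw with -M 50 requests a roughly even read/write mix, and -o 4096 uses 4 KiB I/O. The sweep's effect is visible in the counters: the success/unsuccess split on submitted aborts shifts with depth (863/617 at qd 4 versus 555/2223 at qd 64), which is exactly the queue-depth behavior this test exercises.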
00:35:49.473 10:31:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target
00:35:49.473 10:31:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:35:49.473 10:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:35:49.473 10:31:02 -- common/autotest_common.sh@10 -- # set +x
00:35:49.473 ************************************
00:35:49.473 START TEST kernel_target_abort
00:35:49.473 ************************************
00:35:49.473 10:31:02 -- common/autotest_common.sh@1104 -- # kernel_target
00:35:49.473 10:31:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target
00:35:49.473 10:31:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target
00:35:49.473 10:31:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target
00:35:49.473 10:31:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet
00:35:49.473 10:31:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target
00:35:49.473 10:31:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:35:49.473 10:31:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:35:49.473 10:31:02 -- nvmf/common.sh@627 -- # local block nvme
00:35:49.473 10:31:02 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]]
00:35:49.473 10:31:02 -- nvmf/common.sh@630 -- # modprobe nvmet
00:35:49.473 10:31:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]]
00:35:49.473 10:31:02 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:35:52.004 Waiting for block devices as requested
00:35:52.004 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:35:52.004 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:52.004 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:52.263 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:52.263 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:52.263 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:52.263 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:52.521 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:52.521 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:52.521 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:52.521 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:52.779 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:52.779 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:52.779 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:53.037 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:53.037 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:53.037 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:53.037 10:31:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:35:53.037 10:31:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]]
00:35:53.037 10:31:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1
00:35:53.037 10:31:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:35:53.037 10:31:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:35:53.296 No valid GPT data, bailing
00:35:53.296 10:31:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:35:53.296 10:31:06 -- scripts/common.sh@393 -- # pt=
00:35:53.296 10:31:06 -- scripts/common.sh@394 -- # return 1
00:35:53.296 10:31:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1
00:35:53.296 10:31:06 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]]
00:35:53.296 10:31:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:35:53.296 10:31:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:35:53.296 10:31:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:35:53.296 10:31:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target
00:35:53.296 10:31:06 -- nvmf/common.sh@654 -- # echo 1
00:35:53.296 10:31:06 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1
00:35:53.296 10:31:06 -- nvmf/common.sh@656 -- # echo 1
00:35:53.296 10:31:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1
00:35:53.296 10:31:06 -- nvmf/common.sh@663 -- # echo tcp
00:35:53.296 10:31:06 -- nvmf/common.sh@664 -- # echo 4420
00:35:53.296 10:31:06 -- nvmf/common.sh@665 -- # echo ipv4
00:35:53.296 10:31:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/
00:35:53.296 10:31:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:35:53.296
00:35:53.296 Discovery Log Number of Records 2, Generation counter 2
00:35:53.296 =====Discovery Log Entry 0======
00:35:53.296 trtype: tcp
00:35:53.296 adrfam: ipv4
00:35:53.296 subtype: current discovery subsystem
00:35:53.296 treq: not specified, sq flow control disable supported
00:35:53.296 portid: 1
00:35:53.296 trsvcid: 4420
00:35:53.296 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:35:53.296 traddr: 10.0.0.1
00:35:53.296 eflags: none
00:35:53.296 sectype: none
00:35:53.296 =====Discovery Log Entry 1======
00:35:53.296 trtype: tcp
00:35:53.296 adrfam: ipv4
00:35:53.296 subtype: nvme subsystem
00:35:53.296 treq: not specified, sq flow control disable supported
00:35:53.296 portid: 1
00:35:53.296 trsvcid: 4420
00:35:53.296 subnqn: kernel_target
00:35:53.296 traddr: 10.0.0.1
00:35:53.296 eflags: none
00:35:53.296 sectype: none
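configure_kernel_target above builds the same kind of listener out of the in-kernel nvmet stack through configfs. A condensed sketch: the mkdir, ln -s, and echoed values are straight from the trace, while the attribute file names (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs layout and are my reconstruction of the bare echo lines, not something the trace shows:

  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir subsystems/kernel_target
  mkdir subsystems/kernel_target/namespaces/1
  mkdir ports/1
  echo SPDK-kernel_target > subsystems/kernel_target/attr_serial     # destination assumed
  echo 1 > subsystems/kernel_target/attr_allow_any_host              # destination assumed
  echo /dev/nvme0n1 > subsystems/kernel_target/namespaces/1/device_path
  echo 1 > subsystems/kernel_target/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/

The nvme discover output that follows is the sanity check: record 0 is the discovery subsystem itself, record 1 is the kernel_target subsystem just linked to the port.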
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@24 -- # local target r
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:53.296 10:31:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:35:53.296 EAL: No free 2048 kB hugepages reported on node 1
00:35:56.577 Initializing NVMe Controllers
00:35:56.577 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:35:56.577 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:35:56.577 Initialization complete. Launching workers.
00:35:56.577 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 70212, failed: 0
00:35:56.577 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 70212, failed to submit 0
00:35:56.577 success 0, unsuccess 70212, failed 0
00:35:56.577 10:31:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:56.577 10:31:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:35:56.577 EAL: No free 2048 kB hugepages reported on node 1
00:35:59.861 Initializing NVMe Controllers
00:35:59.861 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:35:59.861 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:35:59.861 Initialization complete. Launching workers.
00:35:59.861 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 120815, failed: 0
00:35:59.861 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30446, failed to submit 90369
00:35:59.861 success 0, unsuccess 30446, failed 0
00:35:59.861 10:31:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:35:59.861 10:31:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:35:59.861 EAL: No free 2048 kB hugepages reported on node 1
00:36:03.143 Initializing NVMe Controllers
00:36:03.143 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:36:03.143 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:36:03.143 Initialization complete. Launching workers.
00:36:03.143 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 116579, failed: 0
00:36:03.143 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29150, failed to submit 87429
00:36:03.143 success 0, unsuccess 29150, failed 0
00:36:03.143 10:31:15 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target
00:36:03.143 10:31:15 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]]
00:36:03.143 10:31:15 -- nvmf/common.sh@677 -- # echo 0
00:36:03.143 10:31:15 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
00:36:03.143 10:31:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:36:03.143 10:31:15 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:36:03.143 10:31:15 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:36:03.143 10:31:15 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*)
00:36:03.143 10:31:15 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet
00:36:03.143
00:36:03.143 real 0m13.251s
00:36:03.143 user 0m6.239s
00:36:03.143 sys 0m3.291s
00:36:03.143 10:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:03.143 10:31:15 -- common/autotest_common.sh@10 -- # set +x
00:36:03.143 ************************************
00:36:03.143 END TEST kernel_target_abort
00:36:03.143 ************************************
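clean_kernel_target above is the mirror image of the configfs setup; the order matters because configfs refuses to remove a directory that is still referenced. Lifted from the trace:

  echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target   # unlink port -> subsystem first
  rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
  modprobe -r nvmet_tcp nvmet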
00:36:05.675 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:05.675 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:05.675 10:31:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:05.675 10:31:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:05.675 10:31:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:05.675 10:31:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:05.675 10:31:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.675 10:31:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.675 10:31:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.579 10:31:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:07.579 00:36:07.579 real 0m41.932s 00:36:07.579 user 1m6.066s 00:36:07.579 sys 0m13.192s 00:36:07.579 10:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:07.579 10:31:20 -- common/autotest_common.sh@10 -- # set +x 00:36:07.579 ************************************ 00:36:07.579 END TEST nvmf_abort_qd_sizes 00:36:07.579 ************************************ 00:36:07.837 10:31:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:07.837 10:31:20 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:07.837 10:31:20 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:07.837 10:31:20 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:07.837 10:31:20 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:07.837 10:31:20 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:07.837 10:31:20 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:07.837 10:31:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:07.837 10:31:20 -- common/autotest_common.sh@10 -- # set +x 00:36:07.837 10:31:20 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:07.837 10:31:20 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:07.837 10:31:20 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:07.837 10:31:20 -- common/autotest_common.sh@10 -- # set +x 00:36:12.029 INFO: APP EXITING 00:36:12.029 INFO: killing all VMs 00:36:12.029 INFO: killing vhost app 00:36:12.029 INFO: EXIT DONE 00:36:13.933 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:13.933 0000:00:04.7 (8086 2021): 
Already using the ioatdma driver 00:36:13.933 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:13.933 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:13.933 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:13.933 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:13.933 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:13.933 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:13.933 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:14.192 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:14.192 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:14.192 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:14.192 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:14.192 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:14.192 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:14.193 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:14.193 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:17.483 Cleaning 00:36:17.483 Removing: /var/run/dpdk/spdk0/config 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:17.483 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:17.483 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:17.483 Removing: /var/run/dpdk/spdk1/config 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:17.483 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:17.483 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:17.483 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:17.483 Removing: /var/run/dpdk/spdk2/config 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:17.483 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:17.483 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:17.483 Removing: /var/run/dpdk/spdk3/config 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:17.483 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:17.483 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:17.483 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:17.483 Removing: /var/run/dpdk/spdk4/config 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:17.483 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:17.484 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:17.484 Removing: /dev/shm/bdev_svc_trace.1 00:36:17.484 Removing: /dev/shm/nvmf_trace.0 00:36:17.484 Removing: /dev/shm/spdk_tgt_trace.pid142930 00:36:17.484 Removing: /var/run/dpdk/spdk0 00:36:17.484 Removing: /var/run/dpdk/spdk1 00:36:17.484 Removing: /var/run/dpdk/spdk2 00:36:17.484 Removing: /var/run/dpdk/spdk3 00:36:17.484 Removing: /var/run/dpdk/spdk4 00:36:17.484 Removing: /var/run/dpdk/spdk_pid140640 00:36:17.484 Removing: /var/run/dpdk/spdk_pid141857 00:36:17.484 Removing: /var/run/dpdk/spdk_pid142930 00:36:17.484 Removing: /var/run/dpdk/spdk_pid143602 00:36:17.484 Removing: /var/run/dpdk/spdk_pid145130 00:36:17.484 Removing: /var/run/dpdk/spdk_pid146421 00:36:17.484 Removing: /var/run/dpdk/spdk_pid146699 00:36:17.484 Removing: /var/run/dpdk/spdk_pid146990 00:36:17.484 Removing: /var/run/dpdk/spdk_pid147292 00:36:17.484 Removing: /var/run/dpdk/spdk_pid147580 00:36:17.484 Removing: /var/run/dpdk/spdk_pid147837 00:36:17.484 Removing: /var/run/dpdk/spdk_pid148087 00:36:17.484 Removing: /var/run/dpdk/spdk_pid148360 00:36:17.484 Removing: /var/run/dpdk/spdk_pid149342 00:36:17.484 Removing: /var/run/dpdk/spdk_pid152331 00:36:17.484 Removing: /var/run/dpdk/spdk_pid152634 00:36:17.484 Removing: /var/run/dpdk/spdk_pid152898 00:36:17.484 Removing: /var/run/dpdk/spdk_pid152914 00:36:17.484 Removing: /var/run/dpdk/spdk_pid153411 00:36:17.484 Removing: /var/run/dpdk/spdk_pid153600 00:36:17.484 Removing: /var/run/dpdk/spdk_pid153919 00:36:17.484 Removing: /var/run/dpdk/spdk_pid154155 00:36:17.484 Removing: /var/run/dpdk/spdk_pid154416 00:36:17.484 Removing: /var/run/dpdk/spdk_pid154545 00:36:17.484 Removing: /var/run/dpdk/spdk_pid154694 00:36:17.484 Removing: /var/run/dpdk/spdk_pid154930 00:36:17.484 Removing: /var/run/dpdk/spdk_pid155420 00:36:17.484 Removing: /var/run/dpdk/spdk_pid155639 00:36:17.484 Removing: /var/run/dpdk/spdk_pid155952 00:36:17.484 Removing: /var/run/dpdk/spdk_pid156219 00:36:17.484 Removing: /var/run/dpdk/spdk_pid156319 00:36:17.484 Removing: /var/run/dpdk/spdk_pid156379 00:36:17.484 Removing: /var/run/dpdk/spdk_pid156619 00:36:17.484 Removing: /var/run/dpdk/spdk_pid156867 00:36:17.484 Removing: /var/run/dpdk/spdk_pid157104 00:36:17.484 Removing: /var/run/dpdk/spdk_pid157357 00:36:17.484 Removing: /var/run/dpdk/spdk_pid157591 00:36:17.484 Removing: /var/run/dpdk/spdk_pid157838 
00:36:17.484 Removing: /var/run/dpdk/spdk_pid158081
00:36:17.484 Removing: /var/run/dpdk/spdk_pid158328
00:36:17.484 Removing: /var/run/dpdk/spdk_pid158564
00:36:17.484 Removing: /var/run/dpdk/spdk_pid158823
00:36:17.484 Removing: /var/run/dpdk/spdk_pid159058
00:36:17.484 Removing: /var/run/dpdk/spdk_pid159307
00:36:17.484 Removing: /var/run/dpdk/spdk_pid159545
00:36:17.484 Removing: /var/run/dpdk/spdk_pid159794
00:36:17.484 Removing: /var/run/dpdk/spdk_pid160029
00:36:17.484 Removing: /var/run/dpdk/spdk_pid160284
00:36:17.484 Removing: /var/run/dpdk/spdk_pid160516
00:36:17.484 Removing: /var/run/dpdk/spdk_pid160771
00:36:17.484 Removing: /var/run/dpdk/spdk_pid161010
00:36:17.484 Removing: /var/run/dpdk/spdk_pid161258
00:36:17.484 Removing: /var/run/dpdk/spdk_pid161498
00:36:17.484 Removing: /var/run/dpdk/spdk_pid161750
00:36:17.484 Removing: /var/run/dpdk/spdk_pid161984
00:36:17.484 Removing: /var/run/dpdk/spdk_pid162237
00:36:17.484 Removing: /var/run/dpdk/spdk_pid162471
00:36:17.484 Removing: /var/run/dpdk/spdk_pid162721
00:36:17.484 Removing: /var/run/dpdk/spdk_pid162961
00:36:17.484 Removing: /var/run/dpdk/spdk_pid163212
00:36:17.484 Removing: /var/run/dpdk/spdk_pid163449
00:36:17.484 Removing: /var/run/dpdk/spdk_pid163703
00:36:17.484 Removing: /var/run/dpdk/spdk_pid163935
00:36:17.484 Removing: /var/run/dpdk/spdk_pid164187
00:36:17.484 Removing: /var/run/dpdk/spdk_pid164431
00:36:17.484 Removing: /var/run/dpdk/spdk_pid164683
00:36:17.484 Removing: /var/run/dpdk/spdk_pid164918
00:36:17.484 Removing: /var/run/dpdk/spdk_pid165176
00:36:17.484 Removing: /var/run/dpdk/spdk_pid165415
00:36:17.484 Removing: /var/run/dpdk/spdk_pid165670
00:36:17.484 Removing: /var/run/dpdk/spdk_pid165903
00:36:17.484 Removing: /var/run/dpdk/spdk_pid166162
00:36:17.484 Removing: /var/run/dpdk/spdk_pid166435
00:36:17.484 Removing: /var/run/dpdk/spdk_pid166742
00:36:17.484 Removing: /var/run/dpdk/spdk_pid170409
00:36:17.484 Removing: /var/run/dpdk/spdk_pid252508
00:36:17.484 Removing: /var/run/dpdk/spdk_pid257178
00:36:17.484 Removing: /var/run/dpdk/spdk_pid266251
00:36:17.484 Removing: /var/run/dpdk/spdk_pid271474
00:36:17.484 Removing: /var/run/dpdk/spdk_pid275492
00:36:17.484 Removing: /var/run/dpdk/spdk_pid276015
00:36:17.484 Removing: /var/run/dpdk/spdk_pid284545
00:36:17.484 Removing: /var/run/dpdk/spdk_pid284805
00:36:17.484 Removing: /var/run/dpdk/spdk_pid289097
00:36:17.484 Removing: /var/run/dpdk/spdk_pid294892
00:36:17.484 Removing: /var/run/dpdk/spdk_pid297588
00:36:17.484 Removing: /var/run/dpdk/spdk_pid308422
00:36:17.484 Removing: /var/run/dpdk/spdk_pid317383
00:36:17.484 Removing: /var/run/dpdk/spdk_pid319097
00:36:17.484 Removing: /var/run/dpdk/spdk_pid320010
00:36:17.484 Removing: /var/run/dpdk/spdk_pid336697
00:36:17.484 Removing: /var/run/dpdk/spdk_pid340713
00:36:17.484 Removing: /var/run/dpdk/spdk_pid345028
00:36:17.484 Removing: /var/run/dpdk/spdk_pid346808
00:36:17.484 Removing: /var/run/dpdk/spdk_pid348750
00:36:17.484 Removing: /var/run/dpdk/spdk_pid348981
00:36:17.484 Removing: /var/run/dpdk/spdk_pid349139
00:36:17.484 Removing: /var/run/dpdk/spdk_pid349306
00:36:17.484 Removing: /var/run/dpdk/spdk_pid350003
00:36:17.484 Removing: /var/run/dpdk/spdk_pid352001
00:36:17.484 Removing: /var/run/dpdk/spdk_pid353398
00:36:17.484 Removing: /var/run/dpdk/spdk_pid353908
00:36:17.484 Removing: /var/run/dpdk/spdk_pid359371
00:36:17.744 Removing: /var/run/dpdk/spdk_pid365004
00:36:17.744 Removing: /var/run/dpdk/spdk_pid369883
00:36:17.744 Removing: /var/run/dpdk/spdk_pid406690
00:36:17.744 Removing: /var/run/dpdk/spdk_pid410580
00:36:17.744 Removing: /var/run/dpdk/spdk_pid416521
00:36:17.744 Removing: /var/run/dpdk/spdk_pid417911
00:36:17.744 Removing: /var/run/dpdk/spdk_pid4185500
00:36:17.744 Removing: /var/run/dpdk/spdk_pid419304
00:36:17.744 Removing: /var/run/dpdk/spdk_pid423613
00:36:17.744 Removing: /var/run/dpdk/spdk_pid427652
00:36:17.744 Removing: /var/run/dpdk/spdk_pid435043
00:36:17.744 Removing: /var/run/dpdk/spdk_pid435049
00:36:17.744 Removing: /var/run/dpdk/spdk_pid439858
00:36:17.744 Removing: /var/run/dpdk/spdk_pid440180
00:36:17.744 Removing: /var/run/dpdk/spdk_pid440493
00:36:17.744 Removing: /var/run/dpdk/spdk_pid441020
00:36:17.744 Removing: /var/run/dpdk/spdk_pid441025
00:36:17.744 Removing: /var/run/dpdk/spdk_pid442446
00:36:17.744 Removing: /var/run/dpdk/spdk_pid444259
00:36:17.744 Removing: /var/run/dpdk/spdk_pid445927
00:36:17.744 Removing: /var/run/dpdk/spdk_pid447556
00:36:17.744 Removing: /var/run/dpdk/spdk_pid449183
00:36:17.744 Removing: /var/run/dpdk/spdk_pid450825
00:36:17.744 Removing: /var/run/dpdk/spdk_pid456704
00:36:17.744 Removing: /var/run/dpdk/spdk_pid457274
00:36:17.744 Removing: /var/run/dpdk/spdk_pid458806
00:36:17.744 Removing: /var/run/dpdk/spdk_pid459534
00:36:17.744 Removing: /var/run/dpdk/spdk_pid465169
00:36:17.744 Removing: /var/run/dpdk/spdk_pid467974
00:36:17.744 Removing: /var/run/dpdk/spdk_pid473191
00:36:17.744 Removing: /var/run/dpdk/spdk_pid479134
00:36:17.744 Removing: /var/run/dpdk/spdk_pid485274
00:36:17.744 Removing: /var/run/dpdk/spdk_pid485971
00:36:17.744 Removing: /var/run/dpdk/spdk_pid486499
00:36:17.744 Removing: /var/run/dpdk/spdk_pid487177
00:36:17.744 Removing: /var/run/dpdk/spdk_pid488160
00:36:17.744 Removing: /var/run/dpdk/spdk_pid488786
00:36:17.744 Removing: /var/run/dpdk/spdk_pid489357
00:36:17.744 Removing: /var/run/dpdk/spdk_pid490068
00:36:17.744 Removing: /var/run/dpdk/spdk_pid494353
00:36:17.744 Removing: /var/run/dpdk/spdk_pid494593
00:36:17.744 Removing: /var/run/dpdk/spdk_pid500492
00:36:17.744 Removing: /var/run/dpdk/spdk_pid500761
00:36:17.744 Removing: /var/run/dpdk/spdk_pid503012
00:36:17.744 Removing: /var/run/dpdk/spdk_pid510608
00:36:17.744 Removing: /var/run/dpdk/spdk_pid510613
00:36:17.744 Removing: /var/run/dpdk/spdk_pid515680
00:36:17.744 Removing: /var/run/dpdk/spdk_pid517674
00:36:17.744 Removing: /var/run/dpdk/spdk_pid519666
00:36:17.744 Removing: /var/run/dpdk/spdk_pid520853
00:36:17.744 Removing: /var/run/dpdk/spdk_pid523258
00:36:17.744 Removing: /var/run/dpdk/spdk_pid524470
00:36:17.744 Removing: /var/run/dpdk/spdk_pid532944
00:36:17.744 Removing: /var/run/dpdk/spdk_pid533417
00:36:17.744 Removing: /var/run/dpdk/spdk_pid534074
00:36:17.744 Removing: /var/run/dpdk/spdk_pid536385
00:36:17.744 Removing: /var/run/dpdk/spdk_pid536880
00:36:17.744 Removing: /var/run/dpdk/spdk_pid537350
00:36:17.744 Clean
00:36:18.003 killing process with pid 95495
00:36:26.117 killing process with pid 95492
00:36:26.117 killing process with pid 95494
00:36:26.117 killing process with pid 95493
00:36:26.117 10:31:38 -- common/autotest_common.sh@1436 -- # return 0
00:36:26.117 10:31:38 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:36:26.117 10:31:38 -- common/autotest_common.sh@718 -- # xtrace_disable
00:36:26.117 10:31:38 -- common/autotest_common.sh@10 -- # set +x
00:36:26.117 10:31:38 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:36:26.117 10:31:38 -- common/autotest_common.sh@718 -- # xtrace_disable
00:36:26.117 10:31:38 -- common/autotest_common.sh@10 -- # set +x
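[Note] The "Cleaning" pass above is SPDK's post-test removal of stale DPDK runtime state: each /var/run/dpdk/spdkN directory is one target's per-file-prefix runtime dir (config, fbarray_memseg-* hugepage maps, hugepage_info, mp_socket), and the /dev/shm/*_trace.* files are leftover SPDK trace buffers. A minimal sketch of an equivalent manual cleanup follows; it is an illustration, not the autotest helper itself, and assumes no SPDK/DPDK process is still running:
  # Sketch: clear leftover DPDK runtime state once all SPDK targets have exited.
  for d in /var/run/dpdk/spdk*; do
      sudo rm -rf "$d"            # per-file-prefix runtime directory
  done
  sudo rm -f /dev/shm/*_trace.*   # stale trace shm files (e.g. nvmf_trace.0)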
00:36:26.117 10:31:38 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:26.117 10:31:38 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:26.117 10:31:38 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:26.117 10:31:38 -- spdk/autotest.sh@394 -- # hash lcov
00:36:26.117 10:31:38 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:36:26.117 10:31:38 -- spdk/autotest.sh@396 -- # hostname
00:36:26.117 10:31:38 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:44.235 geninfo: WARNING: invalid characters removed from testname!
00:36:44.235 10:31:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:46.137 10:31:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:47.510 10:32:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:49.441 10:32:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:50.812 10:32:04 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:52.712 10:32:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
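[Note] The lcov sequence above is the usual capture/merge/filter pattern: autotest.sh@396 captures run-time counters from the build tree into cov_test.info (-c -d), @397 merges them with the post-build baseline cov_base.info (-a ... -a ...), and @398 through @402 strip vendored and out-of-tree code with repeated -r passes. A condensed sketch of the same pattern, with a hypothetical $OUT directory standing in for the job's output path:
  OUT=./output   # hypothetical stand-in for $rootdir/../output
  lcov -q -c -d ./spdk --no-external -t "$(hostname)" -o "$OUT/cov_test.info"
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do   # same filters as above, abridged
      lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done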
00:36:54.087 10:32:07 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:54.087 10:32:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:54.346 10:32:07 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:54.346 10:32:07 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:54.346 10:32:07 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:54.346 10:32:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:54.346 10:32:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:54.346 10:32:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:54.346 10:32:07 -- paths/export.sh@5 -- $ export PATH
00:36:54.346 10:32:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:54.346 10:32:07 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:36:54.346 10:32:07 -- common/autobuild_common.sh@435 -- $ date +%s
00:36:54.346 10:32:07 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713947527.XXXXXX
00:36:54.346 10:32:07 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713947527.cOqhCp
00:36:54.346 10:32:07 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:36:54.346 10:32:07 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:36:54.346 10:32:07 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:36:54.346 10:32:07 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:54.346 10:32:07 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:54.346 10:32:07 -- common/autobuild_common.sh@451 -- $ get_config_params
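[Note] The autopackage prologue above sets up its environment in two steps: paths/export.sh prepends the toolchain directories to PATH (which is why the same entries repeat inside the echoed value), and a disposable packaging workspace is created with mktemp -dt, seeded with the current epoch so concurrent jobs get distinct directories. A sketch of that workspace idiom; the EXIT trap is added here purely for illustration and is not shown in the log:
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")  # e.g. /tmp/spdk_1713947527.cOqhCp
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT                    # illustrative cleanup only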
00:36:54.346 10:32:07 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:36:54.346 10:32:07 -- common/autotest_common.sh@10 -- $ set +x
00:36:54.346 10:32:07 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:36:54.346 10:32:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:36:54.346 10:32:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:54.346 10:32:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:54.346 10:32:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:36:54.346 10:32:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:54.346 10:32:07 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:54.346 10:32:07 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:54.346 10:32:07 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:54.346 10:32:07 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:54.346 10:32:07 -- spdk/autopackage.sh@20 -- $ exit 0
+ [[ -n 53173 ]]
+ sudo kill 53173
00:36:54.355 [Pipeline] }
00:36:54.374 [Pipeline] // stage
00:36:54.380 [Pipeline] }
00:36:54.398 [Pipeline] // timeout
00:36:54.403 [Pipeline] }
00:36:54.420 [Pipeline] // catchError
00:36:54.426 [Pipeline] }
00:36:54.444 [Pipeline] // wrap
00:36:54.450 [Pipeline] }
00:36:54.466 [Pipeline] // catchError
00:36:54.475 [Pipeline] stage
00:36:54.477 [Pipeline] { (Epilogue)
00:36:54.492 [Pipeline] catchError
00:36:54.494 [Pipeline] {
00:36:54.508 [Pipeline] echo
00:36:54.510 Cleanup processes
00:36:54.516 [Pipeline] sh
00:36:54.798 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:54.798 550216 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:54.812 [Pipeline] sh
00:36:55.093 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:55.093 ++ grep -v 'sudo pgrep'
00:36:55.093 ++ awk '{print $1}'
00:36:55.093 + sudo kill -9
00:36:55.093 + true
00:36:55.105 [Pipeline] sh
00:36:55.385 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:07.597 [Pipeline] sh
00:37:07.877 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:07.877 Artifacts sizes are good
00:37:07.889 [Pipeline] archiveArtifacts
00:37:07.896 Archiving artifacts
00:37:08.099 [Pipeline] sh
00:37:08.400 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:08.413 [Pipeline] cleanWs
00:37:08.422 [WS-CLEANUP] Deleting project workspace...
00:37:08.422 [WS-CLEANUP] Deferred wipeout is used...
00:37:08.428 [WS-CLEANUP] done
00:37:08.430 [Pipeline] }
00:37:08.450 [Pipeline] // catchError
00:37:08.461 [Pipeline] sh
00:37:08.739 + logger -p user.info -t JENKINS-CI
00:37:08.747 [Pipeline] }
00:37:08.763 [Pipeline] // stage
00:37:08.768 [Pipeline] }
00:37:08.783 [Pipeline] // node
00:37:08.789 [Pipeline] End of Pipeline
00:37:08.824 Finished: SUCCESS
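[Note] The "Cleanup processes" step in the epilogue above finds any process still holding the workspace with pgrep and kills it hard; the trailing "+ true" keeps the sh step green when nothing matched (here kill -9 received an empty PID list). The same idiom as a one-line sketch, with a hypothetical $WS standing in for the workspace path:
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # as in the log; adjust per job
  sudo kill -9 $(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true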